Arp objects, QSOs, Statistics

You shall receive bluster, foam and smoke, yet answer you shall not have.
Reference to nonexistent posts will be made, and claims of your inability to comprehend, but answer you shall have none.
Vague allusions, bad statistics and avoided issues will abound, but an answer you shall not receive.

BAC:
1. How exactly, and in what post, did you explain that your method can distinguish a random pattern from a non-random one?

And in case you forgot:

2. What mechanism and forces cause galaxies to rotate per Peratt's model of galaxy rotation? What observation has been made to support it?
3. What would keep a Lerner plasmoid of 40,000 solar masses in an area of 43 AU diameter from collapsing to a black hole?
4. What makes star clusters rotate faster around a galaxy than gravity minus dark matter can provide?
5. What part of Narlikar's theory has any observable consequence, other than your gnome of intrinsic redshift?
6. How do you explain the quantity of light elements and heavy elements through PC/PU?
7. How does either Narlikar's theory or other PC/PU provide for the cosmic background radiation spectrum?

Seven muses and seven mysteries; you can drive a Death Star through the holes in your theory, BAC.
 
also re post #308:
BeAChooser said:
But why would one want to apply it to all 104 objects given that only a few of those various type objects are hypothesized as being ejected from that galaxy or any galaxy for that matter? Perhaps you STILL don't understand the nature of the calculations and method?
Indeed, I have admitted, more than once (I think) that I do not understand "the nature of the calculations and method".

You see, I took you at your word ("I've stated my hypothesis very clearly ... that the calculations I made (including the Bayesian portion) strongly suggest that the mainstream's explanation for quasars is WRONG", to take just one example), and concluded that your hypothesis is a test of some proposition derivable from some mainstream theory or other.

I admit that the clues have been there, and that you have strongly hinted at them, many times.

This part of #308 was one of several that I read today that finally made the connection for me ...

The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks', and the null hypothesis has something to do with random (chance) alignments (or something).

And to test your hypothesis you have scoured the literature for papers by Arp et al. which report observations of quasars along (or near) the minor axes of active galaxies (and also something to do with Karlsson peaks).

Your selection method is principally a 'seek and ye shall find' one - the objective is to find as many 'cases' as possible that seem to support your hypothesis - and to apply an a posteriori probability calculation to them.

Other things that twigged me to this explanation include:

* no explicit statement of the hypothesis

* no null hypothesis

* repeated misunderstanding (or worse) over the need for consistency (e.g. in how 'quasars' are selected, for any part of the calculation)

* gross parody of 'mainstream theory', as what you thought you were testing

* lack of derivation of core details of hypothesis to be tested from an explicit statement of 'mainstream theory'.
 
Re #312:
BeAChooser said:
Four, you must have misinterpreted L-C&G's paper AGAIN. It's easy enough to check. Suppose each of the galaxies was more than several degrees away from each other so that those 1 degree zones did not overlap. Out to a distance of 1 degree, each galaxy would occupy about 3 square degrees. Since there are 41,250 square degrees in the entire sky, each galaxy would occupy 0.0000727 of the entire sky. So 70 galaxies would occupy 0.0051 of the entire sky. So if they had claimed that there were 8698 quasars within 1 degree of 70 galaxies, then they would have been implying a total number of observable quasars in the sky of 8698/0.0051 = 1,705,000 ... more than 4 times the number that the SDSS study estimates (and it's the best survey yet, with a claim that its estimate accounts for more than 90% of all observable quasars).
Um, ... er, ... I think you goofed here BAC ...

L-C&G did, in fact, count quasars more than once, where the circles around the galaxies overlapped (and there is quite a bit in their paper about why this is legitimate).

I also think your arithmetic and/or reading of a source is wrong ... if only because SDSS DR6 alone has confirmed spectra of >100k 'quasars' (and the spectroscopic area is only some 7k square degrees).

But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists "8698" in the "Number of pairs" column (meaning galaxy-QSO pairs), on the θ_max = 1.0° line.
 
Re #318:
BeAChooser said:
DeiRenDopa said:
I also pointed out that there seem to be at least five other quasars within 30' of NGC 5985 (in addition to the two mentioned in the Arp paper).
Actually, the Arp paper mentioned 5 quasars. And of the 5 you mentioned "in addition to the two in the Arp paper", two appear to have the same redshift as those in Arp's paper. As to the other 3 quasars, one is outside the 0-3 range and the other two we know nothing else about (like where they are located).
I think you missed the "within 30'" part ...

Here's what I wrote in #294:
In terms of radial distance, here is the list, ranked by distance (from the Arp paper):
12.0' (z=2.13 quasar)
25.2' (1.97)
36.9' (0.81, and the dSp)
48.2' (0.69)
54.3' (1.90)
90.4' (0.35, which is the S1)
So there are only two quasars, in the Arp paper, within 30' of NGC 5985.

The two with the 'same' redshift as those in the Arp paper seem to be quite different quasars:

1537+595 NED01 and NED02 are both listed as being 7.5' from NGC 5985, and have redshifts of 2.132 and 1.968 (respectively); the two 'Arp' quasars have redshifts listed as 2.125 (12.0' distant) and 1.968 (25.2') (respectively).

I wonder how many [of the five 'new' quasars] lie near the minor axis of NGC 5985?
There aren't 5, only 3, and one of those has a z well outside the 0-3 range of my calculation. As to the location of the other two, feel free to find out for us, DRD. I'll be happy to include them in the calculation although the probability will only end up smaller.
There are five, not three, as I just showed.

How about I give you the RA and Dec of all five, and you tell us whether they are sufficiently close to the minor axis to warrant including in your calculations?
 
A suggestion for exploring BeAChooser's approach to hypothesis formation and testing (as presented in this thread).

Let's set up a toy, or mock, universe.

Let's assume it's a flat, 2D universe, with a finite boundary, in the shape of a circle of radius R about the centre of the universe.

Let's assume the universe is populated by two kinds of objects, which we will call AG and Q.

'AG' objects are points with a double-headed arrow.

'Q' objects are also points, with a property called z, which is always an integer in the range 0 to 100 (inclusive).

We are looking at this universe from afar.

Every Q can be related to every AG by two numbers: the distance from AG to Q ('r_AG-Q'), and the angle the line AG-Q makes with respect to the AG's double-headed arrow ('β_AG-Q'); for convenience, we will express this angle as one between 0° and 90° (inclusive).

Note that we could also make our universe the surface of a sphere, with us at its centre; however, to do many of the calculations we would very likely want to do, we'd have to go brush up on spherical trig. That's perfectly OK of course; if JREF forum members who are still reading this thread would prefer the spherical mock universe, please say so (and likewise if you'd prefer the 'circle' universe!).
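For concreteness, here is a minimal NumPy sketch of how one might populate the 'circle' version of this mock universe and compute (r_AG-Q, β_AG-Q) for every pair. The counts, the seed, and the uniform placements are arbitrary choices for illustration, not part of the proposal.

```python
import numpy as np

rng = np.random.default_rng(42)
R = 1.0                        # radius of the mock universe
N_AG, N_Q = 20, 1000           # arbitrary population sizes

def random_points(n):
    # uniform over a disc of radius R (the sqrt gives uniform area density)
    r = R * np.sqrt(rng.random(n))
    t = 2 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

ag_xy = random_points(N_AG)
ag_axis = np.pi * rng.random(N_AG)       # orientation of each double-headed arrow
q_xy = random_points(N_Q)
q_z = rng.integers(0, 101, N_Q)          # z is an integer in 0..100 inclusive

# r_AG-Q and beta_AG-Q (folded into 0..90 degrees) for every AG-Q pair
d = q_xy[None, :, :] - ag_xy[:, None, :]
r_agq = np.hypot(d[..., 0], d[..., 1])
theta = np.arctan2(d[..., 1], d[..., 0]) - ag_axis[:, None]
beta = np.degrees(np.abs(np.arctan2(np.sin(theta), np.cos(theta))))
beta_agq = np.where(beta > 90, 180 - beta, beta)
```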

As I understand it, BAC's hypotheses and the associated tests have to do with (some of the) Qs which are within a certain distance of an AG, for which β_AG-Q is < some number, and for which the z of these Qs belong to a subset of the set of integers 0 to 100.

I have stated that I do not understand BAC's approach.

I hope that an exploration of this mock universe will help me to understand it.

Before proceeding, I'd like to hear from you, the readers of this post.

Specifically, to what extent do you think exploration of this mock universe will enable you to understand BAC's approach? What vital parts of his approach, if any, will exploration of this mock universe fail to clarify?

Any other comments?
 
I don't know, DRD; the BAC method seems to be based upon an even distribution of QSOs.

So if there are three/sq. degree, that means in the model that they are to be distributed so that there are only 3 to a square degree and that they are evenly spaced out.

So if you have 6 apparent in a square degree, then the odds of that are figured from there.

Say that you just say: well, there is a 1/3 chance there would be an extra QSO; since there are three per sq. degree, that means the odds of an extra QSO are 1/3 and the odds of three extra QSOs are (1/3)^3 or 1/27.

At least that is the way the whacky thing looks to me.

Now the stuff about the redshifts is even stranger.

I sure hope the CDC does not use BAC's methods of calculation.

Such as here, where the calculation of appearance along the minor axes is made:
Now let's complete the calculation by again adding in the fact that all 5 objects are aligned rather narrowly along the minor axis. I'll just use the dart board example I used previously, where I found that the probability of throwing 5 darts in a row that land within a 15 degree zone extending from opposite sides of the center of the dart board as being 3.9 x 10^-6 per galaxy. And again, we have to multiply by the number of galaxies with 5 quasars that can be aligned. With only 20,600 such cases possible (conservatively), the probability of finding 5 quasars aligned along the minor axis is therefore 3.9 x 10^-6 * 20,600 = 0.08, which makes the total likelihood of encountering this one case if one carefully studied the entire quasar population equal to 0.08 * 0.0063 = ~0.0005 .

Somehow we have the ratio of distribution as being

3.9 x 10^-6 for the five QSOs to be arranged there,

so we take the total number of QSOs and multiply it by that distribution ratio and get 0.0063.

And somehow this goes back to Bayes' theorem in post #151
http://www.internationalskeptics.com/forums/showpost.php?p=3594665&postcount=151

which seems to require exclusive sets (it hasn't been demonstrated that you are picking balls from a bag; it seems more like counting raindrops to me):
http://www.intmath.com/Counting-probability/10_Bayes-theorem.php

but that doesn't seem to bother some people:
http://en.wikipedia.org/wiki/Bayes'_theorem#Derivation_from_conditional_probabilities

I don't know; it seems to be a philosophical movement as much as a part of modern statistics.

This is more what I am used to:
http://en.wikipedia.org/wiki/Observational_error
http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics
 
DRD,

I think your toy will help.

I hope that BAC will explain his approach using this toy universe, as clearly and succinctly as possible.
 
How can your methodology show a difference between a random placement and a causal one?
... snip ...
Show me where in this thread you have answered this specific question. Otherwise I will go to the Community Forum and call you a liar in a thread dedicated to that purpose.

David, I need to thank you. In the process of preparing a post to address your question (and show that you haven't even tried to understand my methodology), I discovered that I made a mistake in the formula I used to calculate probability.

I will describe and correct that error in my next post, where I will once again calculate the probability of observing the specific set of high red-shift quasars that have been found near several specific low-redshift galaxies, assuming the mainstream claim that z are continuously distributed over the interval z=0-3 is true.

The error I made doesn't change the conclusion I reached nor the answer I will give to your question. But it is an error which I think someone who works in the field of probability and statistics, like you, probably should have caught. From the fact that you didn't, I can only speculate that you either aren't an expert (and mind you, I'm not claiming to be one) or you didn't take the time to try to understand my methodology. :)

In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?

I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?

Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)? Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.

Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random? And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values. Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.

The question then is what is the cause. But the mainstream hasn't gotten to that point because it simply denies that any quasar redshifts might be quantized. So first things first. Let's get to the point of you accepting that at least some redshifts appear to be quantized, then we can go searching for a physical mechanism to explain that. Fair enough, David?

So, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z. If we sample a relatively small number of small areas in the sky (say a small group of the quasar/galaxy associations) where there are r quasars and repeatedly find that the observed z are close to a few specific values of z, then it is obvious that finding those values of z is more likely the result of a process that produces them than of one which produces a uniform distribution of z (and a very low probability of ever seeing those cases).

Now if you can't understand the above explanation, David, then I can't help you further. If you publish your community forum attack on me, should I decide to respond at all (and I may not), I shall simply repeat this and the next post and thereby show that despite your supposed knowledge of probability and statistics, you missed a very serious and frankly very obvious error in my earlier calculation of probability, and that you still don't appear to understand rather simple logic for why my methodology can distinguish between a relatively uniform (random) distribution of z (like the mainstream's) and one (like that theorized by Karlsson) which would be more causally based. :D
 
Now, I want to explain the error I made and correct my methodology accordingly. Let's start by recalling that I claimed the probability of finding r specific values drawn without regard to order from a uniform distribution of n different values is 1/(n!/((n-r)!r!)) = r!/(n!/(n-r)!). But that's wrong.

To find the probability of a set of r specific values picked randomly from a distribution of n different values, we actually need to ratio the number of ways one can pick those r values from the distribution by the number of ways one can pick any r values from the distribution. Right?

For example, if we have a distribution with 5 possible values (call them a,b,c,d,e) and we want the probability of seeing c and d show up in a random draw of 2 values from that pool of 5 possibilities, we first need to find the number of ways we can draw c and d. Well that turns out to be r!, so the answer is 2 in that case.

Next, we need to divide by the number of ways one can draw ANY 2 values from the 5 possibilities. Note that drawing a value does not eliminate it from the pool. The formula to use here is n^r. So there are 5^2 = 25 ways of drawing 2 values from a pool containing 5 different values.

So the probability of seeing c and d in a single observation in the above example is 2/25 = 0.08 = 8 percent.
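A brute-force check of that 8 percent figure, under the model as stated (ordered draws, with replacement):

```python
from itertools import product

pool = ['a', 'b', 'c', 'd', 'e']
draws = list(product(pool, repeat=2))              # 5^2 = 25 ordered draws with replacement
hits = [d for d in draws if set(d) == {'c', 'd'}]  # ('c','d') and ('d','c')
print(len(hits), '/', len(draws))                  # 2 / 25 = 0.08
```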

So the formula I should have used in my calculation for the probability of seeing r specific values of z picked randomly from a distribution of n different values of z is

P = r!/n^r .

Since n^r > n!/(n-r)!, we know that this probability will be somewhat smaller than what I previously calculated.

Now, instead of the probability of r specific numbers, we want the probability of a set of r observed values falling within a certain distance of those r specific numbers. It is this miss distance that determines the value of n we use.

If all the observations fell the same miss-distance from the r specific values, then the value of n would simply be found as follows:

n = possible range of values / increment ; where increment = 2 * miss distance .

But in reality, each of the r observations will likely fall a different miss-distance from the specific r number closest to it. So to handle this, instead of n^r in the denominator of the probability equation, we substitute in n_i1 * n_i2 * ... * n_ir, where the subscript i_j indicates that n is derived from the increment that corresponds to the miss-distance for the j-th data point.

Thus, P = r! * (1/n_i1) * (1/n_i2) * ... * (1/n_ir) .

All would be well at this point, except the distribution of quasar redshift, z, between 0 and 3.0 is not really uniform. Mainstream literature indicates that the frequency of z actually rises from a small value (about zero) near z=0 to a maximum at about z=1, then stays roughly constant until it reaches z=3, where it then rather rapidly drops back to very near zero. A uniform assumption about the distribution will overestimate the effect on probability of data points with z<1 and underestimate the effect on probability of data points with z>1 within the range z=0-3.

I treated it as such in the previous calculations and demonstrated, when challenged about this, that at least for a few of the cases in question that assumption was likely conservative because of other factors in the calculation that also affected the relative weighting of the individual data points.

But this new form of the probability equation raises an interesting possibility ... that of directly accounting for the non-uniform nature of the mainstream's distribution of z. Suppose we weight the terms associated with specific data points (i.e., the (1/n_ij), where j is the data point)? Since these terms are multiplicative, we should use a power law.

The weights should be based on the frequency of each data point relative to the average frequency they would have were they from a uniform distribution instead.

Now in our problem, the frequency rises from zero at z=0 to a maximum at z=1.0 and then stays constant to z=3.0. The area under that frequency distribution should sum to 1 over the entire range giving this equation:

1 = (1/2)*maximum_m + 2*maximum_m = 2.5*maximum_m ; where the subscript m denotes that this is the non-uniform mainstream distribution of z.

The maximum value of the frequency can then be found:

maximum_m = 1/2.5 = 0.400

Now we find a uniform distribution from 0 to 3 that has the same area.

1 = 3*maximum_u

maximum_u = 0.333

The weight assigned to a given z in the non-uniform mainstream distribution depends on where it lies between 0 and 3. A uniform distribution assumption underweights the importance of any z over 1. To correct this, any z over 1 will get a weight of 0.4/0.333 = 1.201 in the analysis. Any z under 1 is overweighted if a uniform distribution is assumed. To correct this, any z under 1 will get a weight less than 1. At z=0.3 the weighting factor is 0.36, while at z=0.6 the weighting factor is 0.72. The weighting equation can be written thus:

w = z*1.201; w<=1.201; 0<z<=3.0 .

The effect of this when dealing with terms that are all less than one will be to make the final probability smaller if the weight is > 1 and make it larger if the weight is < 1. As it should. This may not be the exact weighting that should apply, but I do think it will serve as a first approximation in dealing with the particular concern.

So, to summarize, my new approach to calculating the probability of seeing r observed z values at any galaxy under study, given the mainstream's assumptions about quasars, will be to find an appropriate increment for each z value, determine an n from each of those increments, determine a weight to apply to each n, then multiply them all together as follows:

P_1G = r! * (1/n_i1)^w1 * (1/n_i2)^w2 * ... * (1/n_ir)^wr
where

n_ik = 3.0/(2 * distance to the nearest Karlsson value) for the k-th z value,

w_k = z*1.201; w_k <= 1.201; for the k-th z value,

and

r = number of z values.
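To make the recipe concrete, here is a short Python transcription of P_1G as just defined; the function name is mine, and the Karlsson values are those used in the worked examples below. Unlike those examples, the sketch does not round n to an integer, so its results will differ slightly (and a z sitting exactly on a peak would need special handling).

```python
from math import factorial

KARLSSON = [0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64]

def p_one_galaxy(zs):
    # P_1G = r! * product over k of (1/n_ik)^w_k
    p = factorial(len(zs))                        # r!
    for z in zs:
        miss = min(abs(z - k) for k in KARLSSON)  # miss-distance to nearest peak
        n = 3.0 / (2 * miss)                      # n_ik = range / increment
        w = min(z * 1.201, 1.201)                 # w_k = z*1.201, capped at 1.201
        p *= (1 / n) ** w
    return p
```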

Any problems with that folks?

Now the probability of finding a particular observation, given the mainstream's assumptions, obviously goes up with the number of quasar/galaxy associations that are studied. So what is the probability of seeing a given observation if we were to look at all the quasar/galaxy associations that are possible in the sky? I think this is a useful indication of whether finding half a dozen (or so) very low probability cases after examining only a fraction of the possible quasar/galaxy associations is indicative of a problem. If the probability of finding the case is still very low even if we looked at all the possible associations, then it's highly likely there is a problem in the mainstream's theory regarding the cause of redshift (for quasars at least). Whether the mainstream proponents will admit this or not is another matter. :)

So the next question to answer is: what is the maximum possible number of galaxies in the sky with r associated quasars? I shall call that quantity N_maxGwithr.

To find this, I will be assuming a number of things (some revised from previous calculations as well, based on better information). Those I'm debating are always free to offer specific alternative values for these parameters. If they don't, I can only assume they agree with them and that any complaints are merely hand-waving in stubborn defense of the mainstream theory.

First, the SDSS survey is a relatively complete sampling of all observable quasars. The SDSS website indicates it accounts for over 90%, in the areas that have been surveyed. But I'm going to conservatively assume that they only found 75% of the quasars that exist and could be observed in that survey. So I will increase the number of quasars SDSS found in the portion of the sky they surveyed by 33% and then use that to compute the number of observable quasars across the entire sky.

Now the SDSS surveyors say (according to http://cas.sdss.org/dr6/en/sdss/release/ ) that their DR6 effort (the latest) found 104,140 quasars in a 6860 deg^2 area. That works out to about 15 quasars deg^-2. Following what I stated above, I'm going to increase that to 20 quasars deg^-2. Now the sphere of the sky has about 4*PI*(180/PI)^2 = 41,250 square degrees, so if there are 20 quasars per square degree (over the range of magnitudes we can observe) then there are a possible 825,000 observable quasars in the sky. Anyone want to claim that's not a conservative estimate for the total number of observable quasars in the sky?
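The sky-coverage arithmetic, for anyone who wants to check it:

```python
import math

sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2  # ~41,253 square degrees in the whole sky
print(20 * sky_deg2)                           # ~825,000 quasars at 20 per square degree
```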

Next, there is the question of how those quasars are distributed with respect to low redshift galaxies and to each other. For now, I will conservatively assume that only half of them are near low redshift galaxies. I think that's a VERY conservative assumption and would be very interested in any data that would further refine that parameter. After all, there aren't that many low redshift galaxies. In fact, I showed earlier in the thread (post #223) that perhaps no more than 1 percent of the galaxies lie within z = 0.01, which equates to 1-2 galaxies per square degree of sky. I found a source that indicated most of the galaxies in near field surveys are smaller than 30-40 arcsec in diameter ... meaning they occupy only a fraction of any given square degree field (because 1 arcsec is 1/3600th of a degree). I noted a source that indicates even in galaxy groups, the distance between galaxy members is typically 3-5 times the galaxy diameter. I noted that even the large, naked eye, Andromeda galaxy ... our nearest neighbor ... only has an apparent diameter of 180 arcmin (that's 3 degrees). And I noted that NGC 3516, one of the cases I calculate, only has an apparent diameter of a few arcmin. It may be typical of the specific galaxies I am looking at. So with only 20 quasars per square degree of sky on average, does everyone agree I'm very conservative in assuming only half of all quasars lie near low redshift galaxies like those in each of the cases of interest? If so, then that brings us down to 413,000 quasars in the population of interest. And I suspect the number really should be smaller.

And how are those quasars distributed amongst the low redshift galaxies? In other words, are they spread out evenly over 413,000 different galaxies or do they all lie near one galaxy? Now previously, I assumed that half the quasars are in groups of r. That would have meant that the maximum possible number of quasar/galaxy associations I would use in the following calculations would be 207,000/r. But I think that is far too conservative an assumption ... that high-r associations are much rarer than I assumed. And I think the proof of that is how few high-r cases can be specifically identified by anyone. And certainly the Arp et al. community would like to list as many as possible.

So, instead, I'm going to assume that at each r level, one third of the remaining quasars are distributed at that r level (a short sketch of this recursion follows the table below). I think this will be much more consistent with the mainstream assumption that quasars have no connection to each other or to low redshift galaxies. Thus, at r=1 (in other words, where there is only 1 quasar near a low redshift galaxy), 137,000 of the quasars will be distributed. That leaves 275,000 quasars. And after distributing the r=2 quasars, there are 184,000 left. And then one-third of those will be in r=3 associations. That means there are 61,333 quasars in r=3 associations, for a total of 20,444 possible r=3 quasar/galaxy associations in the sky. That leaves 122,000 quasars still to distribute. One third of those in r=4 associations means there are 10,166 possible r=4 cases. And if one continues this logic, one arrives at the following number of possible quasar/galaxy associations for each r of interest:

N_maxGwithr =

20444 at r=3
10166 at r=4
5433 at r=5
3018 at r=6
1724 at r=7
1000 at r=8
596 at r=9
358 at r=10
216 at r=11
132 at r=12
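Here is the sketch of the 'one third of the remaining quasars at each r level' recursion promised above; the function name is mine, and small differences from the table come from the rounding of intermediate values in the post.

```python
def nmax_g_with_r(total=413_000, r_max=12):
    remaining = total
    table = {}
    for r in range(1, r_max + 1):
        at_level = remaining / 3    # one third of what remains goes to this r level
        table[r] = at_level / r     # associations = quasars at this level / group size
        remaining -= at_level
    return table

print(round(nmax_g_with_r()[3]))    # ~20,395, close to the 20,444 quoted above
```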

Do any of my opponents on this thread want to disagree with this assumption? If so, please provide a specific distribution that I should use and tell us why you think that's more reasonable. If you don't, I can only assume you think this one is reasonable, or at least puts the best face on things from the standpoint of the results you would like to see. :)

So we arrive at the final probability of seeing the specific values of z around each of the cases in question were we to examine all the possible quasar/galaxy associations in the sky assuming the mainstream's theory is correct:

P_zTotal = P_1G * N_maxGwithr

I believe that if this probability is small (<1), then this is a good indication that the mainstream model regarding the origin of quasar z in these specific observations (and perhaps more generally) is badly flawed. And the smaller the probability, the more likely it is that the z for quasars are instead quantized.

But I have one last factor to add: the likelihood of seeing that observation's particular alignment of quasars relative to the minor axis. Since alignments are independent of z under the mainstream model, this probability should be multiplicative with P_zTotal. Now that I think about it some more, I don't like the way I calculated this probability earlier.

I want to find the probability of a specific observation being encountered given the number of quasars observed to be within a 15 degree zone centered about the minor axis. I showed earlier there is a probability of only 0.083 (based on area) that any given quasar will lie in that zone, assuming quasars are positioned randomly like darts (which is the mainstream assumption). And since quasar locations are assumed to be independently located in the mainstream theory, those probabilities are multiplicative.

But if there are quasars in the observation outside the 15 degree zone, the probability of encountering that observation should be increased. In fact, for every quasar outside the zone, I believe the probability of encountering that observation should be increased by a factor of 1/0.917 = 1.091, up to a maximum probability of 1. Thus, I will calculate the probability of seeing the alignment in a single observation as

P_afor1G = 0.083^(number of aligned quasars) * 1.091^(number of non-aligned quasars); P_afor1G <= 1 .

In calculating this probability, I will assume (I think conservatively) that any quasars near a galaxy whose location with respect to the minor axis is unknown are non-aligned.
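As a transcription of this alignment formula (again, the function name is mine):

```python
def p_align_one_galaxy(aligned, non_aligned):
    # 0.083 ~ 30/360: the area fraction of two 15-degree wedges about the minor axis
    p = 0.083 ** aligned * (1 / 0.917) ** non_aligned
    return min(p, 1.0)              # capped at a maximum probability of 1

print(p_align_one_galaxy(5, 0))     # ~3.9e-6, the dart board figure quoted earlier
```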

Now again, the probability of seeing that alignment observation if we examined every possible quasar/galaxy association with r quasars might be a good indicator of whether there is a problem with the mainstream model for generating quasars. So I will calculate

P_aTotal = P_afor1G * N_maxGwithr .

If the probability is < 1, then that is an indication the mainstream model is wrong. In such a case, seeing multiple highly unlikely examples of quasars that are aligned with the minor axis would be an indication that there is some underlying physics producing that phenomenon which the mainstream is ignoring. And the Narlikar/Arp et al. model might be that physics, and therefore deserve a much closer look by the mainstream.

And since the alignment and z phenomena are independent according to the mainstream model,

P_Total = P_zTotal * P_aTotal

And, by the way, with regards to both P_zTotal and P_aTotal, keep in mind that Arp et al. have not examined a large fraction of the total number of possible quasar/galaxy associations with r quasars. In fact, the probability of encountering the group of observations studied here will decrease in direct proportion to the percentage of the total possible quasar/galaxy associations that has actually been studied. This even further strengthens my argument.

Now can we all agree on this approach, or not? Does anyone have any suggestions to improve it ... to make it more accurate? Would anyone like to adjust one of the parameters or equations? Speak up and don't be coy. Or I'll rightly assume you agree with all aspects of this method. :)

Now, with the above as the basis, I will demonstrate that correcting the errors in my method had no significant impact on the overall conclusion I drew from my original calculations. I will redo the full calculation for each of the observations I've previously analyzed. This should serve as a nice summary of the overall method and my current results to this point, and hopefully act as a basis for further debate on this subject.

Let's start with NGC 3516.

In this case, observed z = 0.33, 0.69, 0.93, 1.40, 2.10 . The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 . Therefore, the spacings are +0.03, +0.09, -0.03, -0.01, +0.14 . Doubling the spacings gives the increment width for each data point = 0.06, 0.18, 0.06, 0.02, 0.28 . The n corresponding to each increment is (3/0.06), (3/0.18), (3/0.06), (3/0.02), (3/0.28) = 50, 16, 50, 150, 10 . The weighting factor for each data point is 0.40, 0.83, 1.12, 1.201, 1.201 .

Thus, P_1G = 5! * (1/50)^0.40 * (1/16)^0.83 * (1/50)^1.12 * (1/150)^1.201 * (1/10)^1.201 = 120 * 0.209 * 0.100 * 0.0125 * 0.0024 * 0.063 = 4.7 x 10^-6 (compared to 2 x 10^-6 without the weighting factors).
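Feeding these z values to the p_one_galaxy sketch above reproduces this figure to within rounding (the sketch keeps n unrounded, so it returns ~4.4 x 10^-6 rather than 4.7 x 10^-6):

```python
print(p_one_galaxy([0.33, 0.69, 0.93, 1.40, 2.10]))   # ~4.4e-6
```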

Now with N_maxGwith5 = 5433, P_zTotal(NGC 3516) = 4.7 x 10^-6 * 5433 = 0.026 .

Now to factor in the alignment probability. In this case, I haven't found anything to suggest there are other quasars besides the 5 identified by Arp et al. near this galaxy. If anyone out there can show there are additional quasars within 50 arcmin (given that the galaxy is only a few arcmin across), I will add them to the calculation as non-aligned if we don't know their location, or if we do and they aren't aligned. Until then, it looks like

P_aTotal(NGC 3516) = 0.083^5 * 5433 = 0.021 .

Thus, P_Total(NGC 3516) = 0.026 * 0.021 = 0.00055 .

Wow! That's very small considering it assumes all the possible quasar/galaxy associations in the sky have been examined.

Now consider NGC 5985.

In this case, observed z = 0.69, 0.81, 1.90, 1.97, 2.13 according to http://articles.adsabs.harvard.edu//full/1999A&A...341L...5A/L000006.000.html . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 , the spacings to the nearest Karlsson values are +0.09, -0.15, -0.06, +0.01 and +0.17. The increments are 0.18, 0.30, 0.12, 0.02, 0.34, so the n values are 16, 10, 25, 150 and 8. The weighting factors are 0.83, 0.97, 1.201, 1.201, 1.201.

Thus, P_1G = 4! * (1/16)^0.83 * (1/10)^0.97 * (1/25)^1.201 * (1/150)^1.201 * (1/8)^1.201 = 24 * 0.100 * 0.107 * 0.021 * 0.0024 * 0.082 = 6.8 x 10^-7
P_zTotal(NGC 5985) = 6.8 x 10^-7 * 10166 = 0.007 .

Of the above 5 quasars, 4 are aligned. In addition, DRD provided a source that suggested the existence of two more quasars; with no information about them besides their z, they will be assumed non-aligned. The total number of cases with 8 quasars is

P_aTotal(NGC 5985) = 0.083^4 * 1.091^3 * 1000 = 0.062 .

Thus, P_Total(NGC 5985) = 0.007 * 0.062 = 0.00043 .

Even smaller! So now it's a little hard to believe we just got lucky seeing these observations in the first place.

Now consider NGC 2639.

In this case, http://www.journals.uchicago.edu/doi/abs/10.1086/421465 identifies observed quasar z = 0.305, 0.323, 0.337, 0.352, 1.304, 2.63 . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 , the spacings to the nearest Karlsson values are +0.005, +0.023, +0.037, +0.052, -0.106, 0.01. The increments are 0.01, 0.046, 0.074, 0.104, 0.212, 0.02, so the n values are 300, 65, 40, 28, 14, 150 . The weighting factors are 0.366, 0.388, 0.405, 0.422, 1.201, 1.201.

Thus, P_1G = 6! * (1/300)^0.366 * (1/65)^0.388 * (1/40)^0.405 * (1/28)^0.422 * (1/14)^1.201 * (1/150)^1.201 = 720 * 0.123 * 0.198 * 0.224 * 0.245 * 0.042 * 0.0024 = 9.7 x 10^-5 (compared to the unweighted probability of 1.6 x 10^-8!)

P_zTotal(NGC 2639) = 9.7 x 10^-5 * 3018 = 0.29 (and it may be much smaller than that if my weighting method is faulty).

Of the above 6 quasars, 5 are aligned. The paper also mentions some 3 other x-ray sources lying along the axis but I shall ignore them.

P_aTotal(NGC 2639) = 0.083^5 * 1.091^1 * 3018 = 0.013 .

Thus, P_Total(NGC 2639) = 0.29 * 0.013 = 0.0038 .

Again, a probability that is very small considering that it assumes that all the possible quasar/galaxy associations have been examined. Now are you starting to get the picture, folks?

Now what about NGC 1068?

Recall that http://www.journals.uchicago.edu/doi/abs/10.1086/311832 and http://arxiv.org/abs/astro-ph/0111123 and http://www.sciencedirect.com/scienc...serid=10&md5=596c8badf26d1a60f6786ae0bfcae1d6 collectively list 12 quasars with z = 0.261, 0.385, 0.468, 0.63, 0.649, 0.655, 0.684, 0.726, 1.074, 1.112, 1.552 and 2.018. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 ... the distance to the nearest Karlsson value for each quasar is: -0.039, +0.085, +0.132, +0.03, +0.049, +0.055, +0.084, +0.126, +0.104, +0.152, +0.142, +0.058 . That makes the increments 0.078, 0.170, 0.264, 0.06, 0.098, 0.11, 0.168, 0.252, 0.208, 0.304, 0.284, 0.116 and the n values 38, 17, 11, 50, 30, 27, 17, 11, 14, 9, 10, 25 . The weights are 0.313, 0.462, 0.562, 0.76, 0.78, 0.79, 0.82, 0.87, 1.201, 1.201, 1.201, 1.201 .


Thus, P_1G = 12! * (1/38)^0.313 * (1/17)^0.462 * (1/11)^0.562 * (1/50)^0.76 * (1/30)^0.78 * (1/27)^0.79 * (1/17)^0.82 * (1/11)^0.87 * (1/14)^1.201 * (1/9)^1.201 * (1/10)^1.201 * (1/25)^1.201 = 479,001,600 * 0.320 * 0.270 * 0.260 * 0.051 * 0.070 * 0.074 * 0.100 * 0.124 * 0.042 * 0.005 * 0.063 * 0.021 = 9.7 x 10^-6 (compared to the unweighted probability of 2.8 x 10^-7!)

P_zTotal(NGC 1068) = 9.7 x 10^-6 * 132 = 0.0012 .

Since I don't really know the alignment of these quasars relative to the minor axis,

P_aTotal(NGC 1068) = 1.0

Although as I pointed out previously, there are curious and improbable alignments to be found in this case:

http://www.journals.uchicago.edu/cgi-bin/resolve?ApJ54273PDF "On Quasar Distances and Lifetimes in a Local Model, M. B. Bell, 2001 ... snip ... It was shown previously from the redshifts and positions of the compact, high-redshift objects near the Seyfert galaxy NGC 1068 that they appear to have been ejected from the center of the galaxy in four similarly structured triplets."

But in any case, we currently have P_Total(NGC 1068) = 0.0012 .

So yet again, an incredibly small probability given that this calculation assumes all possible quasar/galaxy associations have been examined.

Dare we draw any conclusions here, folks? Or will the mainstream community be allowed to simply sweep this under the rug?
 
David, I need to thank you. In the process of preparing a post to address your question (and show that you haven't even tried to understand my methodology), I discovered that I made a mistake in the formula I used to calculate probability.

I will describe and correct that error in my next post, where I will once again calculate the probability of observing the specific set of high red-shift quasars that have been found near several specific low-redshift galaxies, assuming the mainstream claim that z are continuously distributed over the interval z=0-3 is true.

The error I made doesn't change the conclusion I reached nor the answer I will give to your question. But it is an error which I think someone who works in the field of probability and statistics, like you, probably should have caught. From the fact that you didn't, I can only speculate that you either aren't an expert (and mind you, I'm not claiming to be one) or you didn't take the time to try to understand my methodology. :)
I am not an expert, nor have I stated that I am one, and I did say I was mystified by your using Bayes' theorem. I have tried to understand your methodology. I disagree with its application. Bayesian theory is used in very specific situations, and the population distribution of traits is not one of them.

You are the one who obviously did not take the time to read the three posts I wrote on statistics and population theory and sampling.

So what does that make you?


But that is your thing.
In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?
No, in this thread the question is:

How can your methodology show a difference in the placement/association of QSOs in relation to Arp galaxies?

But thanks for just ignoring that.
I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.
And your numbers would be exactly the same, whether for the distribution of QSOs or for the z values. You do know that Bayes' theorem is no longer used in most statistical settings, like epidemiology and drug efficacy, don't you?

If I gave you a random placement of QSOs around galaxies and a causal placement of QSOs around galaxies, your method would give the same numbers.

It cannot distinguish them.

Why won't you answer that question?

Just as if I made a random set of z values and a causal one, you could not distinguish them.

Why not answer that?

Why is that, BAC? Why are you using Bayes' theorem in a method which is discredited and not used in comparable studies?

Hmmm?

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?
Yes: a posteriori statistics that could be affected by sample bias, small sample size or sampling error.

None of which you address.
Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)?
No, because of the sample issues, which you haven't addressed, and refuse to address, even though they are the point of the thread.
Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.
And your calculations would be the same for a random set vs. a causal set.

Do you understand that or just ignore it?
Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random?
No. No more so than a royal flush in stud poker would be non-random.

That is the issue to address.

Which you haven't; you keep ignoring the issue that I have pointed out.

Yes, the specific probability of any configuration is low; that does not tell you whether it is random or causal.

Why not address that, which was covered in the three posts I made on statistics?
And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?
No it's not; a random set can only be distinguished from a causal set through the methods that you are NOT using.

Again I refer you to the first of the three posts I made; I believe it was #165.
In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values.
No, that is not true; that is like saying that you should not get a full house in stud, no-draw poker unless it is non-random.

Which is an error.
Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.
Your ignorance is showing.
The question then is what is the cause. But the mainstream hasn't gotten to that point because it simply denies that any quasar redshifts might be quantized. So first things first. Let's get to the point of you accepting that at least some redshifts appear to be quantized, then we can go searching for a physical mechanism to explain that. Fair enough, David?
You haven't demonstrated anything except that your use of Bayes' theorem is misplaced.

It would give the same values for a random and a causal set.
So, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z.
No it is not. It would give the same values for a random and a causal set.

The specific probability is always going to have a low value, which is why you have to look at the distribution of values in samples of large sets.
If we sample a relatively small number of small areas in the sky (say a small group of the quasar/galaxy associations) where there are r quasars and repeatedly find that the observed z are close to a few specific values of z, then it is obvious that finding those values of z is more likely the result of a process that produces them than of one which produces a uniform distribution of z (and a very low probability of ever seeing those cases).
No it is not.
Now if you can't understand the above explanation, David, then I can't help you further. If you publish your community forum attack on me, should I decide to respond at all (and I may not), I shall simply repeat this and the next post and thereby show that despite your supposed knowledge of probability and statistics, you missed a very serious and frankly very obvious error in my earlier calculation of probability, and that you still don't appear to understand rather simple logic for why my methodology can distinguish between a relatively uniform (random) distribution of z (like the mainstream's) and one (like that theorized by Karlsson) which would be more causally based. :D

You have just asserted that you can tell the difference, but you are wrong.

Where in population studies do they use Bayes' theorem to determine causality?

You do know the difference between causality and correlation don't you?

I doubt it.
 
BAC, you are basically saying that someone who gets a full house in a stud poker game can only do so from a non-random process.

Which is wrong.

Here is the probability of any given hand in five card stud, no draw:

(1/52)^5 ≈ 2.63 x 10^-9
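A quick check of that figure, plus, for comparison, the probability of one specific unordered 5-card hand dealt without replacement:

```python
from math import comb

print((1 / 52) ** 5)    # ~2.63e-9: five independent picks with replacement, order counted
print(1 / comb(52, 5))  # ~3.85e-7: one specific hand out of C(52,5) = 2,598,960
```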
So any hand dealt in poker is from a non-random process!

I name you Iron Troll.
 
A quick summary of contemporary, mainstream, answer to 'what's a quasar?'

A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk, broad line region (BLR), and narrow line region (NLR). The cutoff between a (type 1) Seyfert and a quasar is arbitrary, typically at M_B = -22 (where M_B is the absolute B band magnitude).

An AGN for which our view of the accretion disk and BLR is obscured is a type 2 quasar or a type 2 Seyfert.

While the conditions under which an AGN becomes a strong radio source are not yet well understood, AGNs with strong jets do produce double-lobed radio sources.

If the dusty torus obscures our view of even the NLR, an AGN is usually visible as only an x-ray source (if there are strong jets, they may also be radio sources).

If the viewing geometry is such that we are looking (almost) directly down one of the jets, the AGN is seen as a BL Lac, blazar, or OVV quasar. These AGNs are also (often) both x-ray and radio sources.

It is not often appreciated just how small, physically, an AGN is: the dusty torus, BLR, accretion disk, and SMBH are all contained within a region that may be only light-days across, and rarely more than a light-month or so; of course the NLR may be up to several light-years in size, and the jets and radio lobes may stretch for tens of thousands of light-years (or more).

Here is a webpage with a brief summary of the various kinds of AGN.

AGNs are one of the most active areas of research in extra-galactic astronomy and astrophysics; for example ADS lists 357 references with 'AGN' in the title for just 2007. Some of the research topics are:

* what is the AGN duty cycle? For how long does an AGN remain 'on'? how long does it stay quiescent before turning 'on' again?

* what fuels an AGN? How is mass funnelled into the accretion disk?

* how do AGNs evolve? What is the role of galaxy collisions/mergers in AGN evolution?

* what are the physical mechanisms that generate the dusty torus? That determine its size and nature?

* what proportion of AGNs are obscured? How does this proportion change with AGN evolution?

Historically, the general acceptance of quasars being at distances commensurate with their redshifts (via the Hubble relationship) came about a decade after they were discovered (i.e. in the 1970s). The unified AGN model became generally accepted about a decade later, and even contrarians such as Bell acknowledge that the various classes of observational objects are all the same 'real' class of objects (AGNs), with the dividing lines between them simply arbitrary divisions of continuous distributions. Thus the distinction that Chu et al. and Arp et al. drew in their 20th century papers (e.g. on NGC 3516 and NGC 5985) between quasars and Seyferts, as potentially different classes of 'real' objects rather than arbitrary divisions of an AGN continuous distribution, is now very rarely made.

Today's better understanding of AGNs also explains rather nicely one curious thing which some Arpians are (still) fond of pointing out: obscured AGNs show up as x-ray (point) sources. As x-ray astronomy developed it became increasingly easy to determine which point sources were AGNs and which XRBs (x-ray binaries) in (local) galaxies, leading to the discovery that many 'ULX' (ultra-luminous x-ray sources) 'in' or 'near' local galaxies were, in fact, AGNs … and that the sky density of such implied that obscured AGNs outnumber unobscured ones.

One of the most exciting recent discoveries concerning AGNs is the statistical association between some ~30 ultra-high-energy (EeV-scale) cosmic rays and AGNs within ~100 Mpc of us (by the Auger Collaboration) - for the first time we have the possibility of doing extragalactic astronomy with something other than photons (well, the second ... a handful of neutrinos from SN 1987a were detected); this result, if confirmed, is another demonstration that AGNs are not local in the Arpian sense, and that the CMB is indeed cosmic.
 
Dancing David, just one comment on the use of Bayesian statistics in contemporary astronomy: it is used, quite widely. While you may find examples of such an approach being used incorrectly (after all, scientists are human and do make mistakes), I doubt you'd find anything much you'd take exception to.

For example, the Final HKP paper (Freedman et al. 2001) that I mentioned earlier in this thread presents three different approaches to making an estimate - traditional (frequentist), Bayesian, and Monte Carlo. For most purposes this would be overkill; given how central H0 is in extra-galactic astronomy and cosmology, that the authors took the precaution of doing their sums three different ways is A Good Thing (and that the answers were consistent An Even Better Thing!).
 
Thanks for all your information. Bayesian methods do have uses, but I am not sure about BAC's use of statistics in general; he just demonstrated that poker hands must be non-random.

That AGN stuff is so cool.

But ignored by BAC.
 
{snip}

In any case, let me now address your question and please note that the answer was right in front of you.

Your question is how can my methodology show a difference between a random placement and a causal one? But to be more precise, your question should be restated: how can my methodology distinguish between the existence of physics which produce a relatively uniform distribution of z (like the mainstream model) and physics which produce a highly non-uniform distribution of z (like Karlsson's quantized values)?

I want you to think carefully about what I'm calculating. I find the probability that a specific observation of quasar z values would occur, assuming that the mainstream assumptions about quasar/galaxy associations and quasar redshift (z) distribution are true. More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

If the observation exactly matches the Karlsson values then the probability calculated by this method is very, very small. If the observation doesn't match the Karlsson values, then the calculated probability is much, much larger. All the cases I've studied here have produced very small probabilities. And yet each of those cases actually exists. Does that perhaps suggest something?

Can we agree that the low probabilities I've calculated for seeing specific observations with a specific set of z values, assuming a random "placement" of z values, are correct and very small (<<1)? Can we agree that the probability of seeing those observations would still be small (<1) even if a VERY large number of galaxies were examined ... even if all the galaxies that could potentially have quasars near them were examined? That's what my calculations indicate.

Yet in spite of that fact, multiple observations with those low probability z combinations have actually been observed. Doesn't that suggest the placement of z values might not be random? And if the number of galaxies actually examined so far to obtain those observations isn't anywhere near as large as the number of potential galaxies with quasars near them on which that <1 probability is based, doesn't that STRONGLY suggest the placement of z values isn't random? Isn't this obvious, David?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values. Frankly, we should be surprised if we do see such a case. But if we see only one, we might think "well, we got lucky". But when we see two, we probably should begin to get a little suspicious. And when we see half a dozen observations which are calculated to be low probability cases, maybe it's time to wake up to the fact that this is telling us something; namely, that the z are not randomly produced at all but causal with preferred values near the ones identified by Karlsson.

{snip}
.
I'm going to comment on only parts of this, in line with what I've been writing all along, namely the inputs and assumptions built into those inputs (not the calculations themselves).

First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks', I don't recall you having actually presented any physical model(s).

Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.

Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.

Fourth, the observational uncertainties are, usually, quite small - a z can be measured to three decimal places, as 1.23x, where the last digit x is uncertain by ± 1 (or perhaps ± 2). A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5 ... unless, of course, the physical model that generates 'Karlsson peaks' accommodates 'misses' that are (much) greater than the observational uncertainties.

Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks' of any note. Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...

Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ... unless there is a strong a priori case for choosing just the 'cases' you select to apply that analysis to (NGC 3516, NGC 5985, etc). Such a case may be due to the physical model of Karlsson peaks, or something else entirely ... and such a case may be quite legitimate. However, it seems that you have not presented any such case (other than something like 'here's what Arp chose to observe').
 
Some quick comments on post #329.

If it's the 'mainstream' understanding of how 'quasars' are distributed, then the numbers chosen as inputs need to be replaced by estimates of AGNs, and such estimates need to take into account the (arbitrary) low-luminosity cutoff (boundary between type 1 Seyferts and quasars), the ratio of obscured to unobscured AGNs, the estimated number of as-yet-undetected blazars (etc), AGN evolution, lensing, and clustering.

Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).

In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.

The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.

I think the 'low redshift galaxies' numbers (etc) are also incorrect, as in this paragraph:
Next, there is the question of how those quasars are distributed with respect to low redshift galaxies and to each other. For now, I will conservatively assume that only half of them are near low redshift galaxies. I think that's a VERY conservative assumption and would be very interested in any data that would further refine that parameter. After all, there aren't that many low redshift galaxies. In fact, I showed earlier in the thread (post #223) that perhaps no more than 1 percent of the galaxies lie within z = 0.01, which equates to 1-2 galaxies per square degree of sky. I found a source that indicated most of the galaxies in near field surveys are smaller than 30-40 arcsec in diameter ... meaning they occupy only a fraction of any given square degree field (because 1 arcsec is 1/3600th of a degree). I noted a source that indicates even in galaxy groups, the distance between galaxy members is typically 3-5 times the galaxy diameter. I noted that even the large, naked eye, Andromeda galaxy ... our nearest neighbor ... only has an apparent diameter of 180 arcmin (that's 3 degrees). And I noted that NGC 3516, one of the cases I calculate, only has an apparent diameter of a few arcmin. It may be typical of the specific galaxies I am looking at. So with only 20 quasars per square degree of sky on average, does everyone agree I'm very conservative in assuming only half of all quasars lie near low redshift galaxies like those in each of the cases of interest? If so, then that brings us down to 413,000 quasars in the population of interest. And I suspect the number really should be smaller.

(to be continued)
 
And in case you forgot:

2. What mechanism and forces cause galaxies to rotate per Peratt's model of galaxy rotation? What observations have been made to support it?
3. What would keep a Lerner plasmoid of 40,000 solar masses in an area of 43 AU diameter from collapsing to a black hole?
4. What makes star clusters rotate faster around a galaxy than gravity minus dark matter can provide?
5. What part of Narlikar's theory has any observable consequence, other than your gnome of intrinsic redshift?
6. How do you explain the quantity of light elements and heavy elements through PC/PU?
7. How does either Narlikar's theory or other PC/PU provide for the cosmic background radiation spectrum?

And in case you forgot, here are your own words at the start of this thread:


1. Please try to stay on topic; if you bring in material it should be directly related to the topic at hand. (I know I am a great derailer!)
2. Please do not spam the thread with multiple links that are unrelated to the topic; discussions of Dark Matter, Electric Universe, Plasma Cosmology or attacks on the same should be limited and relevant to the thread.
3. Please do not use ad homs, character slurs or accuse people of reading comprehension problems, or attack spelling and grammar...

I've been trying to abide by them ... well, almost all of them. :D
 
The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks',

I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.

I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D
 
L-C&G did, in fact, count quasars more than once,

So you think their method overcounted quasars by a factor of 4? :rolleyes:

I also think your arithmetic and/or reading of a source is wrong ... if only because SDSS DR6 alone has confirmed spectra of >100k 'quasars' (and the spectroscopic area is only some 7k square degrees).

Nothing wrong with my arithmetic or reading since that's what I used in my calculation. :)

But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists "8698" in the "Number of pairs" column (meaning galaxy-QSO pairs), on the θmax = 1.0° line.

Again, the L-C&G paper states very clearly that the number of samples is insufficient to draw any conclusions. You seem to want to pick and choose ... accept some results and ignore others.
 
The two with the 'same' redshift as those in the Arp paper seem to be quite different quasars:

1537+595 NED01 and NED02 are both listed as being 7.5' from NGC 5985, and have redshifts of 2.132 and 1.968 (respectively); the two 'Arp' quasars have redshifts listed as 2.125 (12.0' distant) and 1.968 (25.2') (respectively).

Ok. Now that you clarify things, I'll agree that some additional quasars should be added in this case to the calculation. Let's do that:

Revised calculation for NGC 5985.

There are quasars at z = 0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132 and 3.88. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48, the spacings to the nearest Karlsson values are 0.09, 0.15, 0.06, 0.008, 0.008, 0.165, 0.172 (and I will ignore the z = 3.88 datum as it's outside the range of Karlsson values and the quasar distribution that's defined). The increments (double the spacings) are 0.18, 0.30, 0.12, 0.016, 0.016, 0.33, 0.344, so n equals 16, 10, 25, 187, 187, 9 and 8. The weighting factors (whether I should use them is a question) are 0.83 and 0.97 for the first two, and 1.201 for all the rest.

Now, P1G = 7! * (1/16)^0.83 * (1/10)^0.97 * (1/25)^1.201 * (1/187)^1.201 * (1/187)^1.201 * (1/9)^1.201 * (1/8)^1.201 = 5040 * 0.100 * 0.107 * 0.021 * 0.0019 * 0.0019 * 0.071 * 0.082 = 2.3 x 10^-8 (compared to 6.8 x 10^-7 previously), with an unweighted probability of 5 x 10^-7 (compared to 5 x 10^-6 previously)

Since NmaxGwithr = 1724 for r = 7,

PzTotalNGC5985 = 2.3 x 10^-8 * 1724 = 4 x 10^-5 = 0.00004 (compared to 0.007 previously). DRD, without even looking at the alignment probability, which will be basically unchanged from before since I seem to have used 7 quasars in that calculation with only 4 aligned, this is a very unlikely observation even were we able to examine every possible quasar/galaxy association in the sky. Wouldn't you agree? This is like my holding what you believe is a regular deck of cards, watching me shuffle them, and then beginning to turn over cards. I turn over the first 4 cards and find 4 of a kind. You think "what luck". I turn over the next 4 cards and reveal another 4 of a kind. You think "incredible" and "what are the odds of this happening". But if the next 4 cards turn out to be another 4 of a kind, you should start to wonder whether your assumptions about the deck and the shuffling are correct. :)
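[Editor's note: the arithmetic above checks out; here is a quick sketch that reproduces it, where the n and weight values are simply the figures quoted above (math.prod needs Python 3.8+).]

Code:
from math import factorial, prod

n = [16, 10, 25, 187, 187, 9, 8]
w = [0.83, 0.97, 1.201, 1.201, 1.201, 1.201, 1.201]

p1g = factorial(7) * prod((1.0 / ni) ** wi for ni, wi in zip(n, w))
p_unweighted = factorial(7) * prod(1.0 / ni for ni in n)

print(f"{p1g:.1e}")           # ~2.3e-08, as claimed
print(f"{p_unweighted:.1e}")  # ~5.0e-07
print(f"{p1g * 1724:.1e}")    # ~4.0e-05, the PzTotal figure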
How about I give you the RA and Dec of all five, and you tell us whether they are sufficiently close to the minor axis to warrant including in your calculations?
Do you think it will help your case? :D
 
(continued)

BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?

For example, do you expect that 'all' 'quasars' 'near' the 'minor axis' of such galaxies will have redshifts that are 'near' one of the 'Karlsson peaks'?

And for each of the key words (near (as in 'near' a low redshift galaxy), low redshift, galaxy, all, quasars, near (as in 'near' the minor axis), minor axis, and Karlsson peak), do you have a priori quantitative values/expressions?

One of my reasons for asking these questions is that you have to hand all that you need to test the model ... SDSS DR6 contains consistent, high quality data on an awful lot of 'low redshift galaxies' and 'quasars', perhaps enough to test your model far, far more extensively than can be done with just a handful of objects selected from papers by Arp et al.

In principle, it would be relatively easy to simply download the galaxy and quasar data from the SDSS data server and analyse it yourself.

Of course, to avoid the sort of a posteriori issues that are so prominent in this thread, you'd need to state the hypotheses you intend to test, including the null hypothesis(es), very clearly, before you begin the analysis. Even better, if you create a mock catalogue (or 10,000 mock catalogues) and do some Monte Carlo analyses on these before you start on the real data, you'd be in even better shape to address the inevitable questions about confirmation bias ...
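[Editor's note: for anyone wanting to try, here is a hedged sketch of what such a mock-catalogue Monte Carlo might look like. The test statistic, the uniform-z null, and all names here are illustrative assumptions, not anything from SDSS or from the posts above.]

Code:
import random

KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64, 3.48]

def mean_offset(zs):
    """Test statistic: mean distance to the nearest Karlsson peak."""
    return sum(min(abs(z - k) for k in KARLSSON) for z in zs) / len(zs)

def p_value(observed_zs, n_mocks=10_000, seed=42):
    """Fraction of mock catalogues (z uniform on [0, 3]) that look at
    least as 'peaked' as the observed set."""
    rng = random.Random(seed)
    obs = mean_offset(observed_zs)
    hits = sum(
        mean_offset([rng.uniform(0.0, 3.0)
                     for _ in range(len(observed_zs))]) <= obs
        for _ in range(n_mocks)
    )
    return hits / n_mocks

# Example, using the NGC 5985 redshifts quoted in this thread:
print(p_value([0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]))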
 
DeiRenDopa said:
The hypothesis that you are testing has to do with an idea that '(all) quasars' are ejected '(predominantly) along the minor axes' of 'active galaxies', and have redshifts 'at Karlsson peaks',
I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.
.
The ones, and only the ones, that you have selected?

Or all quasars 'predominantly along' the minor axes?

Out to what distance?

How many 'particular galaxies'?

Or all galaxies of that kind (whatever kind that is)?

.
I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D
.

Well, the answer is, as you have already stated, 1 ... because they are observed to be there.

And no, I can't do any calculations of the kind you have done, if only because I (still) don't (yet) understand what those calculations are! See my last post.

In fact, I'd challenge any reader of this thread to repeat the kind of 'probability estimates' you have come up with, using a different set of input data ....
 
DeiRenDopa said:
L-C&G did, in fact, count quasars more than once,
So you think their method overcounted quasars by a factor of 4? :rolleyes:
.
I'll allow any reader sufficiently interested to go check for themselves (L-C&G are quite open about what they did, and why ...)
.
{snip}
But mostly I think you yourself didn't actually read L-C&G ... Table 1 ("Anisotropy of QSOs with different constraints") lists "8698" in the "Number of pairs" column (meaning galaxy-QSO pairs), on the θmax = 1.0° line.
Again, the L-C&G paper states very clearly that the number of samples is insufficient to draw any conclusions. You seem to want to pick and choose ... accept some results and ignore others.
.

Indeed.

Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?

Normally, when you increase the sample size by a factor of several dozen (or more), you expect a signal that is statistically significant in the smaller sample to become blindingly obvious in the bigger one ... yet this did not, apparently, happen; in fact, the opposite happened. How come?
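[Editor's note: this point is easy to illustrate with a toy binomial test. The 52% minor-axis rate below is a made-up number purely for illustration, against a 50% chance rate.]

Code:
from math import erf, sqrt

def one_sided_p(k, n, p0=0.5):
    """Normal-approximation p-value for k successes in n trials under
    a null rate p0 - adequate for this illustration."""
    mean, sd = n * p0, sqrt(n * p0 * (1.0 - p0))
    return 0.5 * (1.0 - erf((k - mean) / (sd * sqrt(2.0))))

# A hypothetical 52% minor-axis rate where chance predicts 50%:
print(one_sided_p(21, 40))      # ~0.38: invisible in a small sample
print(one_sided_p(4523, 8698))  # ~1e-4: obvious at L-C&G's sample size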
 
I'm not testing "all" quasars, DRD. I'm testing the quasars in these particular galaxies for which I'm doing calculations.

I tell you what, DRD. Since you don't seem to like the calculations I've done, why don't you show us what the probability of observing those cases is given the mainstream theory? And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you? :D

And your probability shows that a full house in five-card, no-draw poker must come from non-random processes.

You can't tell an honest dealer from a cheat with your method.
 
{snip}

Do you think it will help your case? :D
.
I don't know what 'case' you think I'm making (other than to try to understand what you are doing), but in any case, this thread is (as I understand it) an examination of the statistical methods you (and Arp et al.) have used ...

So far my comments have been limited to the inputs - 'what is a quasar?', how was X selected?, what about Y?, etc - rather than the actual calculations themselves.

But maybe I did present a 'case'; if so, would you please point to the post(s) in which I did so?
 
{snip}

There are quasars at z = 0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132 and 3.88. With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64, 3.48, the spacings to the nearest Karlsson values are 0.09, 0.15, 0.06, 0.008, 0.008, 0.165, 0.172 (and I will ignore the z = 3.88 datum as it's outside the range of Karlsson values and the quasar distribution that's defined). The increments (double the spacings) are 0.18, 0.30, 0.12, 0.016, 0.016, 0.33, 0.344, so n equals 16, 10, 25, 187, 187, 9 and 8. The weighting factors (whether I should use them is a question) are 0.83 and 0.97 for the first two, and 1.201 for all the rest.

{snip}
.
As I noted earlier, if the model being tested is 'quantized redshifts', and if those redshifts are 'Karlsson peaks', then first we need a precise definition of those peaks, and a calculation of the z values of those peaks to at least 3 decimal places.

Then we need the stated observational uncertainty of the observed redshifts.

Then, if a relevant peak is more than 3 sigma from an observed redshift, it counts as a 'miss'.

This of course makes the calculations a lot simpler - like heads and tails of flipping a coin.

However, if the model being tested is not 'quantized redshifts', then what is it?

In the absence of stated observational uncertainties we can't do a calculation of how well the data match the model; however, we can sketch one.

Suppose each redshift has an observational, 1 sigma, uncertainty of 0.001.

Then none of the quasars has an observed redshift within 3 sigma of a Karlsson peak (assuming BAC's numbers can be independently verified/validated) - the closest are two misses of 0.008.

Thus the NGC 5985 data are clearly inconsistent with a model that says quasars ejected (predominantly) along the minor axes of active galaxies have quantized redshifts corresponding to the Karlsson peaks.
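[Editor's note: in code, the proposed test is a one-liner per quasar. This is only a sketch, with the 0.001 sigma being the assumed figure above and the Karlsson values those used elsewhere in the thread.]

Code:
KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64, 3.48]
SIGMA_Z = 0.001  # assumed observational uncertainty, per the sketch above

def is_hit(z, n_sigma=3):
    """A z counts as a hit only if within n_sigma of some Karlsson peak."""
    return min(abs(z - k) for k in KARLSSON) <= n_sigma * SIGMA_Z

zs = [0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]
print([is_hit(z) for z in zs])  # all False: closest miss, 0.008, is ~8 sigma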
 
I am not an expert, nor have I stated that I am one.

I'm sorry. I guess I misunderstood your boasts that you are "used to population sampling", "trained in sampling theory, practiced sampling theory and read a lot of publications regarding sampling theory and research articles using sampling theory" and bold assertions that I don't understand probability. To me that seemed like a claim of expertise. :)

Bayesian theory is used in very specific situations and the population distribution of traits is not one of them.

Actually, you might be surprised at how widely used Bayesian theory is outside of your world, David. Maybe that's why you get over 5 million hits if you plug in "Bayesian" into Google. :)

How can your methodology show a difference in placement association of QSOs in relation to Arp galaxies?

Now you want to change your question? Actually, David, I don't think you even know the question. I'm not sure you even know what we are discussing here. :D

BeAChooser wrote: More specifically, I'm calculating in each case the probability of encountering an observation with a set of z's that match the Karlsson values within the accuracy that the observation actually matches the Karlsson values, while assuming that the z values are not quantized around Karlsson values but instead come from a relatively uniform distribution of z between 0 and 3.

And your numbers would be exactly the same.

Compared to what, David? Be more specific if you want me to understand what you are saying. Be aware that I already demonstrated to DRD that if one plugs in purely random values for z between 0-3 into my calculations instead of the observed z, one obtains VERY different results.

You do know that Bayes theorem is no longer used in most statistical settings, like epidemiology and drug efficacy, don't you?

Like I said, David, you apparently are unaware how widely used Bayes Theorem is outside your little domain. But then I'm not sure why you're even mentioning Bayes Theorem with regard to the above since that's an aspect of my analysis that comes AFTER the probability I described above is calculated.

If I gave you a random placement of QSOs around galaxies and a causal placement of QSOs around galaxies, your method would give the same numbers.

You are completely wrong, David. I already demonstrated that to DRD for one of the observations by picking a set of z out of my head and calculating a probability to compare to the one generated using the observed z in that case. Did you miss that?

I tell you what, why don't you prove it to yourself. Take the case of NGC 5985 that I just recalculated above and instead of using the observed z in that calculation, generate groups of 7 random values between 0.000 and 3.000 and plug them into the method. Ignore the weighting factors since that won't affect the conclusion. Compile a list of the probabilities you get and then compare the average of those probabilities to the unweighted probability I calculated using the 7 observed z. I predict your results won't even be close, David. Care to bet? :D
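[Editor's note: for what it's worth, here is a rough version of that experiment. It is my own reconstruction, since BAC's code is not posted; I use the continuous factor 2*offset/3 per quasar rather than his integer-truncated n, which does not change the picture.]

Code:
import random
from math import factorial, prod

KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64, 3.48]

def unweighted_p(zs):
    """r! times the product of per-quasar factors 2 * offset / 3."""
    offsets = [min(abs(z - k) for k in KARLSSON) for z in zs]
    return factorial(len(zs)) * prod(min(1.0, 2.0 * d / 3.0) for d in offsets)

observed = [0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]
rng = random.Random(0)
randoms = [unweighted_p([rng.uniform(0.0, 3.0) for _ in range(7)])
           for _ in range(10_000)]

print(f"observed:    {unweighted_p(observed):.1e}")       # ~4e-07
print(f"random mean: {sum(randoms) / len(randoms):.1e}")  # hundreds of times larger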

Why are you using Bayes theorem in a method which is discredited and not used in comparable studies?

Bayes theorem isn't used in the above calculation of probability. Or have you not even figured that out, David? Hmmmm?

In other words, given a relatively limited number of observations, we simply don't expect to see any cases where all or most of the z values are close to Karlsson values.

No, that is not true; that is like saying that you should not get a full house in stud, no-draw poker unless it is non-random.

I'd love to play poker with you. Tell us, David ... knowing the likely distribution of cards in a shuffled deck, do you expect to see 4 of a kind if I turn over the first 4 cards in the deck? Do you expect to see the next four cards also be 4 of a kind? Because the probabilities of seeing those quasar observations that I've calculated work out to LESS than the probability of seeing 4 of a kind in the above example.
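[Editor's note: for reference, the fair-deck numbers behind this analogy are straightforward combinatorics.]

Code:
from math import comb

p_first = 13 / comb(52, 4)   # first 4 cards are four of a kind
p_second = 12 / comb(48, 4)  # and the next 4 as well
print(f"{p_first:.1e}")             # ~4.8e-05
print(f"{p_first * p_second:.1e}")  # ~3.0e-09 for both in a row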

You haven't demonstrated anything except that your use of Bayes theorem is misplaced.

You keep talking about Bayes Theorem and I'm not sure why given that the methodology in my last several posts doesn't involve it. And frankly, I don't think you even understand Bayes Theorem, David.

So, to answer your question, my procedure is most definitely capable of distinguishing between a uniform (random) and a non-uniform (causal) distribution of z.

No, it is not. It would give the same values for a random and a causal set.

Go ahead, David. Do that little calculation I suggested. Prove me wrong. Prove you get no significant difference in probability if you randomly draw z from a uniform distribution and compare it to what you get from drawing z from a distribution where all the values are close to the specific Karlsson values. Here's your chance to definitively prove me wrong, David. So go for it. :D
 
BAC, you are basically saying that someone who gets a full house in a stud poker game can only do so from a non-random process.

David, I've said nothing of the sort. I grow weary of you again. Do the little Monte Carlo analysis I suggested and stop embarrassing yourself.

2.8301 x 10^-9

And do you expect to get a specific hand ... say a Royal Flush in Spades ... in the first few thousand hands, David? Apparently so. :D
 
{snip}
So how would you use your method to determine whether a placement is causal or random, which is the issue of the thread?

I say that the statistics that you and Arp use cannot do that. You say that they can, so show me.

Thanks for paying some attention.

While you are at it, why don't you address the issue of sample bias?

I know that you have your own thoughts, but given the fact that Arp's and your analyses are not based upon distributions of frequency, how can you tell a random placement of QSOs around a galaxy from a causal one?

That is the thrust of the thread.
 
{snip}

And do you expect to get a specific hand ... say a Royal Flush in Spades ... in the first few thousand hands, David? Apparently so. :D

I expect that the hands will fall around a random distribution, unless the dealer is cheating. So the hands should follow distribution patterns. You can only tell whether it is cheating or random from a large number of hands.

Duh.

I notice again that you have not answered the question:

How can your method tell a random placement of QSOs around a galaxy from a causal one?

Perhaps you will talk about that and how small samples have a higher chance for bias?
 
A (type 1) quasar is a luminous active galactic nucleus (AGN) for which we have an unobscured view of the accretion disk

Your definition ASSUMES that you know for a fact that a quasar is a black hole. But you don't. You have only inferred that from your belief that redshift always equates to distance. Because quasars are relatively bright and (you think) very far away, and their energy output can change rapidly, you have inferred that each must be a black hole. But if redshift is not always related to distance, as is the subject of this thread, then there would be no need to hypothesize a black hole to explain the energy output of a quasar.
 
First, the only way to do this kind of analysis, properly, is to start with the physical models; in the case of 'Karlsson peaks'

That's nonsense, DRD. Many an important discovery has been made because someone observed that existing theory could not explain an observation or outright contradicted it. And once they were convinced of that, scientists then went looking for an explanation. I don't have to have a physical model for why quasar redshifts might tend towards Karlsson peaks to show that they do, contrary to what the mainstream model would claim.

Second, if the 'Karlsson peaks' are indeed fully quantized, then they have precise values, to as many decimal places as you care to calculate.

Since you and David are apparently unable to show that the probabilities I've calculated are wrong, you resort to word games. I've never claimed ... nor has anyone ... not even Karlsson ... that quasar redshifts have precise numbers. The term quantized has been used here to denote a marked tendency TOWARD certain values. What term would you prefer I use for that? I'll be happy to oblige as long as it captures the essence of what is meant.

Third, as I read the 'Karlsson peak' literature, there is only a general consensus on what these are, more or less; the actual values, to 3 or 4 significant figures, seem to vary from paper to paper.

Why would you expect the values in different studies to be *precisely* the same, given that they are based on statistical analysis of data? Whether the first peak is at 0.06 or 0.062 and another peak at 0.96 or 0.953 is of no account. It doesn't affect the calculated probabilities in any significant way. You are simply obfuscating in order to avoid directly addressing those probability estimates and the methodology by which they were derived.

A corollary is that 'missing' a Karlsson peak by more than ~0.002 is the same as 'missing' one by 0.1, or even 0.5

This is absolutely bogus logic because no one is claiming that redshifts are exactly at Karlsson peaks in the Karlsson model. We don't know the physical model accounting for an apparent TENDENCY of redshifts to be near Karlsson values. But like it or not, the calculations clearly show there is such a tendency and maybe if astronomers were actually doing their job and not ignoring anything that threatens their model full of gnomes they'd be investigating why.

Fifth, as I have already said, more than once, the distribution of observed AGN redshifts is quite smooth, with no 'Karlsson peaks'

Actually, that issue isn't settled. More than one study (I've referenced several now) has looked at the data and indeed found evidence that quasar redshifts have a tendency towards Karlsson peaks.

Of course, if the physical model you are seeking to test applies to only a tiny subset of observed quasars ...

Haven't you been listening to what I've been saying at all, DRD? I've never made a claim about ALL redshifts being quantized. It is your side that makes claims regarding ALL redshifts ... namely, that they ALL equate to distance and that ALL high redshift quasars are randomly located with respect to low redshift, local galaxies.

Sixth, as has been said many, many times, the kind of a posteriori approach you are using for your analysis is invalid ...

This is again a bogus argument. The mainstream has a model that says quasars are randomly distributed across the sky, independent of low redshift galaxies. That model says that the redshifts of these objects come from a continuous distribution with no peaks at Karlsson values. The mainstream has supplied estimates for the number of quasars and the number of low redshift galaxies. I've made some additional (conservative) assumptions about the distribution of those quasars with respect to those galaxies. Using all the above, I've then calculated the probability that we'd see certain observations that indeed we have already seen. The results can be viewed as a prediction (made long ago) by the very nature of your model. So all I'm doing, therefore, is demonstrating that your model failed that prediction. I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. Your model therefore fails the test. And this argument of yours about a posteriori and a priori is nothing more than a smoke screen to help you avoid that glaring fact.
 
If it's the 'mainstream' understanding of how 'quasars' are distributed, then the numbers chosen as inputs need to be replaced by estimates of AGNs

Wrong, because you've only ASSUMED that quasars are AGNs whose black holes have turned on. Otherwise you can't explain quasars at all. I'm saying that assumption is clearly wrong if the redshifts are indeed quantized (and you know full well what I mean by that term). I'm saying that assumption is clearly wrong if there are indeed high redshift quasars that are on this side of low redshift galaxies ... as also seems to be clear in some observations.

Further, the approach in this post contains no 'null test'; for example, in addition to a (very) small number of large, bright (active?) galaxies, there should be at least an equal number of other large, bright galaxies (perhaps a range of galaxy types, including ellipticals, lenticulars, spirals, irregulars; dwarfs and giants; merging and colliding galaxies; ...).

Actually, your example is completely irrelevant to the calculations I made. Nor, if I did that, would it help your case. I didn't throw out any possible quasar/galaxy associations because their galaxies were not of the type in the cases I examined. If I had, the probability of seeing those cases would be even lower.

In addition, if the distribution of redshifts of 'quasars' in the selected subset does not match that of the population (to within the relevant statistic), then by definition it is not a random selection.

By subset, I presume you mean the specific galaxies for which I did calculations? That you make this statement demonstrates that you STILL don't understand the calculation methodology at all. What am I to do folks? He's the *expert*. :rolleyes:

The method outlined in this post also seems to assume that large, bright galaxies are not clustered in any way; clearly they are.

If you put more galaxies in one location in the sky, yet the quasar locations remain random with respect to those galaxies, that doesn't improve the probability of seeing galaxies with more quasars. It probably lowers it. Or it simply makes no difference in the larger picture, because you've lowered the probability of seeing large quasar/galaxy associations in the rest of the quasar sample. Your argument is again superfluous. Irrelevant. And by "clustered", how far apart are they from our viewing angle? Show us how this would specifically affect my calculation because, frankly, I think you are handwaving ... throwing out red herrings.

I think the 'low redshift galaxies' numbers (etc) are also incorrect

Well, you are going to have to be a little more specific. :)
 
BAC, would you mind taking a few minutes to write down what it is that you (or the model you are interested in) expect to find, in the vicinity of ('near') 'low redshift' 'galaxies'?

Well, that depends on how many low redshift galaxies we look at. Right? It has to be expressed probabilistically. Right? And I've done that. I've calculated the probability ... with what I believe to be mainstream model assumptions, mainstream observations regarding quasars and low redshift galaxies, and some (what I believe to be) conservative assumptions about quasar placement relative to low redshift galaxies ... of seeing certain observations. Since those probabilities were << 1, even if all the possible quasar/galaxy associations in the sky were presumably looked at, I surely wouldn't expect to see them if only a small fraction of possible quasar/galaxy associations had been looked at. Would you?

For example, do you expect that 'all' 'quasars' 'near' the 'minor axis' of such galaxies will have redshifts that are 'near' one of the 'Karlsson peaks'?

No, and I think I've said that. In fact, some of those specific galaxies I did calculations for had quasars that weren't near Karlsson peaks. That fact is accounted for in those calculations.

One of my reasons for asking these questions is that you have to hand all that you need to test the model ... SDSS DR6 contains consistent, high quality data on an awful lot of 'low redshift galaxies' and 'quasars', perhaps enough to test your model far, far more extensively than can be done with just a handful of objects selected from papers by Arp et al.

Probably so. But I'm not an astronomer so it's not my job to do that. You are. Right? :D
 
{snip} I am dealing cards from a deck ... your deck ... and not expecting to see certain hands in a limited number of deals ... in fact, probabilities derived using your model indicate I shouldn't expect to see those hands even if I deal until the cards wear out. Yet I do. {snip}

Hi BeAChooser.
A low probability of an event (e.g. of a royal flush in dealt cards) does not mean that the event will never happen nor does it mean that it can only happen after a large number of tests. What it means is that each time you do a test (deal the cards) there is a probability that the event will happen. So you can get a royal flush on any deal - the first, second or millionth. Likewise a low probability of QSOs aligned with the minor axis of a low redshift galaxy does not mean that you will not see the alignment until you do an enormous number of observations. You can see the alignment on the first, second or millionth observation.
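[Editor's note: this point can be put in numbers. Using the four-of-a-kind figure from earlier as a stand-in rare event, the chance of seeing it at least once grows with the number of deals, but the chance on any single deal, including the very first, is always just p.]

Code:
p = 4.8e-5  # e.g. four of a kind in the first four cards of a deal
for n in (1, 1_000, 20_000, 100_000):
    # chance of at least one occurrence within n independent deals
    print(n, f"{1.0 - (1.0 - p) ** n:.3g}")
# 1 -> 4.8e-05, 1000 -> 0.0469, 20000 -> 0.617, 100000 -> 0.992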
 
BeAChooser wrote: And don't answer 1.0 because that's not an answer ... that's avoiding the question. Do an analysis like mine to calculate the probability of those observations. You can do that, can't you?

Well, the answer is, as you have already stated, 1 ... because they are observed to be there.

An answer which demonstrates that you are either deliberately avoiding the question, because you know the answer doesn't help your case, or that you don't understand a VERY simple concept where probability analysis is concerned. :)

In fact, I'd challenge any reader of this thread to repeat the kind of 'probability estimates' you have come up with, using a different set of input data ....

Wow! Insulting our readers. Come on DRD ... you don't think our readers are capable of

- coming up with a random set of r values of z between 0.00 and 3.00 to replace the ones I used as observations in one of the calculations I did?

- finding the difference between those z values and the nearest Karlsson values to each?

- doubling those differences to get a set of increments?

- dividing 3.0 by those increments to get a set of n?

- finding the probability with this formula: P = r! * (1/n_1) * (1/n_2) * ... * (1/n_r)?

- Comparing that probability to the one I got for that case using the observed z?

You should have more faith in their and your abilities. :D
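[Editor's note: taking up that invitation, the steps just listed transcribe directly into a few lines. This is a sketch: I keep the integer truncation of n seen in the worked examples, and guard against a random z landing exactly on a peak.]

Code:
import random
from math import factorial, prod

KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64, 3.48]

def recipe_p(zs):
    """P = r! * (1/n_1) * ... * (1/n_r), with n_i = 3.0 / increment_i
    and increment_i = twice the distance to the nearest Karlsson value."""
    ns = []
    for z in zs:
        d = max(min(abs(z - k) for k in KARLSSON), 1e-3)  # guard exact hits
        ns.append(max(1, int(3.0 / (2.0 * d))))
    return factorial(len(zs)) * prod(1.0 / n for n in ns)

observed = [0.69, 0.81, 1.90, 1.968, 1.968, 2.125, 2.132]
rng = random.Random(7)
random_set = [round(rng.uniform(0.0, 3.0), 3) for _ in range(7)]

print(f"{recipe_p(observed):.1e}")    # ~5e-07, the unweighted NGC 5985 figure
print(f"{recipe_p(random_set):.1e}")  # almost always much larger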
 
Hiya BAC, I have two sets of number pairs from 1-100.
One set is causal and would be noticed as such over a large number of generation trials; one set is random (both values are just (x, y) with x, y in 1-100).

I assure you that one set is very biased and weighted. I will show the generation in two days.

Here they are in random order

Set 1
(56,81)(15,77)(53,8)(29,23)(54,60)(53,34)(63,71)(64,57)(38,66)(56,16)

Set 2
(10,78)(60,18)(53,77)(74,15)(27,80)(63,80)(32,61)(9,79)(10,40)(8,56)

Which set is random and which set is weighted? If a set is weighted, what is the algorithm?
 
Which leads right back to the question I asked ... if nearly 9000 quasar-galaxy pairs, involving ~70 galaxies, are inadequate for L-C&G to show a minor-axis anisotropy, what is it about ~20 (40?) quasar-galaxy pairs, involving ~4 galaxies, that makes your a posteriori selection statistically valid?

First of all, you misrepresent the report. The report doesn't claim there are 9000 quasars ... just 9000 QSO-galaxy pairs. That might correspond to fewer than 9000 quasars if any of the 1 deg radius circles around each of the 71 galaxies overlap, so that quasars can be paired with more than one galaxy. If the area around each galaxy searched is π x 1² ≈ 3 deg² and none of those circles overlap, that's 213 deg². Now the SDSS surveyors said that the average density was about 15 quasars per square degree. So we'd expect to see about 3195 quasars in that much area. That suggests there are overlaps. And if there are on average 15 quasars per deg², and most of the alignments I've studied occur over distances only a fraction of a degree in diameter, the number of quasar/galaxy associations in that sample like the ones in the observations I've studied (with r's of 4 to 8) must be very, very small. Perhaps there isn't even a case like that in the sample? So perhaps that sample isn't a fair representation of the population as a whole? Which leads right back to the questions I ask you. What makes you sure that a small group of galaxies with associated quasars would contain enough cases with 4 or 5 or 6 aligned quasars to show up at all, if the probabilities I've calculated for such alignments are accurate? Or if one or two did, what makes you sure the statistical approach wouldn't mask the effect? Furthermore, what makes you think that minor axes are the only alignments that quasars can have with respect to galaxy features? Perhaps some of those other alignments are more important but also harder to detect, because the features are more difficult to detect than minor axis alignment?
 
