Arp objects, QSOs, Statistics

NGC 3516 and its quasars ...

It's ~a decade since the Chu et al. paper was published, and no doubt many more quasars (and galaxies) 'near' it have been discovered.

Based on Arpian ideas, what do you predict the redshifts of those 'newly discovered' quasars (and galaxies) will be BeAChooser? Where will they be, in relation to NGC 3516?

And what would you predict for the next ten years, and the ten years after that, and ...?
 
Sorry BAC, the first part of my answer ran a little long, but it is relevant to how a large number of samples and sample sets is needed to determine association.

David, the coin toss samples are completely independent events drawn from a process with the exact same probability of producing a head every single time. Do you think that the likelihoods of quasar/cluster arrangements and redshifts are completely independent of one another?
Now you are using a priori arguments to justify your use of a posteriori arguments.

The null assumption is that the objects are randomly distributed; in other words, that there is no pattern beforehand.

And remember that random does not mean evenly distributed. Take a matrix of 1,000 x 1,000 cells and randomly place dots in it, using the pseudo-random generator of your choice, six ten-sided dice, or whatever means you like for placing dots in the 2D matrix.
Then place a number of dots, clearing the matrix between runs: say ten dots, one hundred dots and one thousand dots in three separate runs.

In the ten-dot run the chance that any particular coordinate position receives a dot is very low, 0.00001; in the 100-dot run it is still low, 0.0001; and in the thousand-dot run it is still low, 0.001.

Yet here is the thing: patterns can arise from a totally random system. Even in the ten-dot run you can get dots next to each other from a purely random process.

Say a dot is at coordinate position (X,Y). In the ten-dot run, what is the chance that the adjacent cell (X+1,Y) also receives a dot? The chance does not change; it is still 0.00001. It does not go up or down because there is a dot at (X,Y). There are eight cells adjacent to the placed dot, and each of them has exactly the same chance of placement, 0.00001, regardless of any prior placements. Now consider every possible configuration of ten dots in the matrix: that is a really big number of configurations.

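For what it's worth, here is a rough Python sketch of that experiment (my own quick illustration, nothing from any paper; the grid size and dot count are the numbers used above, the trial count is an arbitrary choice):

```python
import math
import random

SIZE = 1_000          # grid is SIZE x SIZE cells
N_DOTS = 10           # dots per configuration
TRIALS = 20_000       # number of independent random configurations

# Total number of possible ten-dot configurations in the matrix.
print(f"number of ten-dot configurations: {math.comb(SIZE * SIZE, N_DOTS):.3e}")

def random_configuration():
    """Place N_DOTS dots on distinct random cells; return a set of (x, y)."""
    cells = set()
    while len(cells) < N_DOTS:
        cells.add((random.randrange(SIZE), random.randrange(SIZE)))
    return cells

def has_adjacent_pair(cells):
    """True if any two dots touch (8-cell neighbourhood)."""
    return any((x + dx, y + dy) in cells
               for (x, y) in cells
               for dx in (-1, 0, 1)
               for dy in (-1, 0, 1)
               if dx or dy)

hits = sum(has_adjacent_pair(random_configuration()) for _ in range(TRIALS))
print(f"fraction of random configurations with an adjacent pair: {hits / TRIALS:.5f}")
# Small, but not zero: 'clusters' do arise from a purely random process.
```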

But that does not mean that a particular configuration is more or less likely than any other configuration.

A specific placement of all ten dots in a line has exactly the same chance of occurring as any other possible configuration. That is what makes it random.

So all configurations are equally probable: a pattern with the dots in a line is no less likely than a pattern with the dots dispersed; they are equally likely.

So when you have a hundred dots and a thousand dots, you can get all sorts of patterns and associations, but they are still arising from a totally random process.

What you would have to do is study large numbers of configurations to determine whether the distribution is random or weighted in some way.

Say we have two algorithms for determining dot placement (a rough sketch of both follows the list).
1. Random: there is no weighting or bias in the distribution.
2. Weighted: there is a small chance that a square near an existing dot will receive another dot at an elevated rate. Say the algorithm places dots on the following basis: for each dot after the first, there is a 10% chance that it will be a biased dot, placed within a three-square radius of an existing dot, with both the radial distance and the choice of originating dot determined randomly. (So first there is a 1/10 probability of a dot being biased once dot n=1 has been placed; if it is biased, an originating dot is randomly chosen from all existing dots, and the biased dot is then placed within three squares of that dot.)
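Here is a rough Python sketch of the two algorithms as described (the 10% bias and the three-square radius are the numbers above; the grid size and dot count are just illustrative choices, and I read "within three squares" as a simple box around the originating dot):

```python
import random

SIZE = 1_000      # illustrative grid size
N_DOTS = 100      # illustrative number of dots per configuration
BIAS_P = 0.10     # chance that a dot after the first is a "biased" dot
RADIUS = 3        # biased dots land within this many squares of an existing dot

def place_random():
    """Algorithm 1: purely random, no weighting."""
    dots = set()
    while len(dots) < N_DOTS:
        dots.add((random.randrange(SIZE), random.randrange(SIZE)))
    return dots

def place_weighted():
    """Algorithm 2: each dot after the first has a BIAS_P chance of being
    placed within RADIUS squares of a randomly chosen existing dot."""
    dots = [(random.randrange(SIZE), random.randrange(SIZE))]
    while len(dots) < N_DOTS:
        if random.random() < BIAS_P:
            ox, oy = random.choice(dots)    # originating dot, chosen at random
            x = min(max(ox + random.randint(-RADIUS, RADIUS), 0), SIZE - 1)
            y = min(max(oy + random.randint(-RADIUS, RADIUS), 0), SIZE - 1)
        else:
            x, y = random.randrange(SIZE), random.randrange(SIZE)
        if (x, y) not in dots:              # keep at most one dot per square
            dots.append((x, y))
    return set(dots)
```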


Now say that you are given one configuration from each of the two algorithms. Are you going to be able to say which one came from the random placement and which one from the weighted placement? Not very likely; you do not have enough samples to tell.

The ability to distinguish the random placement from the weighted one requires multiple samples generated by each algorithm. With a sample size of one per algorithm (Sn = 1), it is essentially impossible to tell the two apart, because you cannot determine a distribution pattern for either algorithm.

It is only with much larger Sn, such as 100, 1000, 10000 that you would be able to detect the difference between the two algorithms with any sort of accuracy.

This is counterintuitive, I understand that, but I am discussing real-world things here. With Sn = 1 it would be very hard to tell whether the algorithms differ; it is only as Sn gets to be very high that you could determine that algorithm number two has a weighted distribution.

Both will exhibit patterns, it is only from the comparison of a large Sn that a determination could be made with any accuracy.

So this is very comparable to determining whether QSO placement is random or not: a visible pattern is just as possible in a single configuration. A limited Sn, say of 25, is not going to give you enough data to say that there is a weighted distribution.

It is only by comparing large Sn, and looking at the patterns across configurations, that the weighting becomes visible. With a large enough Sn, for example, the average distance between dots will be seen to differ between the two algorithms (sketched below); it will not be apparent until Sn is high enough.
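A sketch of how that comparison might go, assuming the place_random and place_weighted functions from the earlier sketch are in scope; the mean nearest-neighbour distance is just one convenient statistic to compare:

```python
import math
import statistics
# Assumes place_random() and place_weighted() from the earlier sketch are defined.

def mean_nearest_neighbour(dots):
    """Average distance from each dot to its closest neighbour."""
    dots = list(dots)
    total = 0.0
    for i, (x1, y1) in enumerate(dots):
        total += min(math.hypot(x1 - x2, y1 - y2)
                     for j, (x2, y2) in enumerate(dots) if i != j)
    return total / len(dots)

def summarise(generator, sn):
    """Mean and spread of the statistic over sn configurations."""
    values = [mean_nearest_neighbour(generator()) for _ in range(sn)]
    return statistics.mean(values), statistics.pstdev(values)

for sn in (1, 100, 1000):   # pure Python, so the larger runs take a while
    print(sn, summarise(place_random, sn), summarise(place_weighted, sn))
# With Sn = 1 the difference can hide inside the configuration-to-configuration
# scatter; averaged over many configurations, the weighted algorithm's smaller
# mean nearest-neighbour distance becomes apparent.
```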
 
Hiya BAC,

The first post ran long, but I hope I made things a little clearer. The question is how you decide that there is a difference between your theorized effect and a random distribution, and it is the plague of many a science.

Back to your post, and thanks for the dialogue.
...
When you identify a sample in the population by naming it, don't you remove it from the population for drawing the next sample?
Well this is going to sound goofy because it depends upon the sampling technique you use.

In a total census survey, something like what the US government does, there is an effort to sample every member of the population, so you do not want to survey anyone twice; and that is good practice in all cases.

However it is rather more complex than that, and a real bugaboo for things like 'political opinion surveys'.

The problem is that you may want to know the opinion of 300 million people but you most likely do not want to pay for that. Or you want to estimate combat deaths in a war, which is very risky. So you try to take a representative sample of the population.

So say, for the political opinion survey, you want to decide what your sample size is going to be, and because of cost the sample never even approaches 1% of the population. What you try to do is make sure that your sample reflects the demographics of the country as a whole (which is a huge sticking point and a real source of sampling bias).

So what you want to do is take a sample of the country and match it to the demographic makeup of the country.

Again, not as easy as it seems. If you are going to survey 2,000 people, you don't want to just call 40 people in each state; you want the numbers called to reflect the actual population distribution. So you have to use other surveys before you even start your survey. Then you want to match socioeconomic status, education, employment and a host of other variables. So again, more surveys before you even survey, if you really want a decent sample.
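As a toy illustration of that kind of proportional allocation (the region names and population figures below are made up purely for the example):

```python
# Hypothetical regional populations, in millions, for illustration only.
populations = {"Region A": 40, "Region B": 20, "Region C": 25, "Region D": 15}
total_sample = 2_000   # total interviews we can afford

total_pop = sum(populations.values())
allocation = {region: round(total_sample * pop / total_pop)
              for region, pop in populations.items()}
print(allocation)  # {'Region A': 800, 'Region B': 400, 'Region C': 500, 'Region D': 300}
# Each region gets interviews in proportion to its share of the population,
# rather than an equal split; the same idea extends to age, income,
# education and the other variables mentioned above.
```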

This is not what is usually done. There are two common methods (and plenty more). The first is to pretend that you are getting a random sample by just calling two thousand people on the phone, randomly chosen from your database, and assuming that this matches the demographics of the country.

The other one is to find ‘representative markets’, say a town that does reflect the demographic of a region, and you sample randomly in those ‘markets’ based upon the idea that they do accurately reflect the region they are meant to represent.

Now in the matter at hand, the QSOs, the best method is to determine the standard or normative values of chosen populations and subpopulations, which is what I have suggested.

You should not repeat a member in different samples, so as you suggest a member is removed from the other sample sets.

But a thousand of each of the following would give you a huge sample set to begin to look at:
-‘normal’ galaxies
-AGN galaxies
-old galaxies
-young galaxies
-random spots on the sky

Then you can learn certain things, like the areal density of QSOs in areas that are not deliberately associated with galaxies (the random points). This is the first normative sample, the one you hope gives you your baseline 'noise', i.e. the random level of QSO statistics.

The others would be chosen as subpopulations that could really make or break the case for Arp's association.

What you would be looking for is that the level of QSOs associated with Arp objects rises one or more standard deviations above the noise level, the normative value for the random spot on the sky.

So yes, you do not want the same area or galaxy represented twice in your survey. But that does not mean you can say that the QSO association is 10^-6 probable, because you don't really know what the distribution of QSOs is; you need to compare it to a representative sample of the QSO population. Then you can say either that there is something above the noise level or that larger samples are needed to determine an effect, especially if it comes close to the standard deviation.

Then you take larger representative samples and larger study object samples hopefully to clarify the data.

These representative samples are of course a potential source of bias, but it is a trade-off of cost versus representation.
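A quick sketch of the kind of comparison I mean, with invented placeholder counts rather than real survey numbers:

```python
import statistics

# Invented QSO counts within some fixed radius of 1,000 random sky points
# (placeholder data; a real study would draw these from a survey catalogue).
random_point_counts = [2, 3, 1, 4, 2, 3, 2, 5, 1, 3] * 100

baseline_mean = statistics.mean(random_point_counts)
baseline_sd = statistics.pstdev(random_point_counts)

arp_count = 6   # invented count around one study object (e.g. an Arp galaxy)

z = (arp_count - baseline_mean) / baseline_sd
print(f"baseline {baseline_mean:.2f} +/- {baseline_sd:.2f}, study object z = {z:.2f}")
# Only if the study objects sit well above the noise level (one or more
# standard deviations, and reliably so across a large sample) would the
# association start to look like more than noise.
```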
 
.
Specifically, if the authors of a catalogue explicitly state their catalogue should not be used for statistical analyses, and someone proceeds to do just that, what degree of credibility do you think should be given to that someone's paper?


Hi DRD, my brain is now up to speed. I would say that you would have to be very careful to acknowledge whatever inconsistencies or rationale led the original researchers to say that their survey should not be used in that fashion, and so it would have to be used with great care and caution.
 
Sorry BAC, I hope I haven't bored you to death; here is a third lengthy post.

Look at it this way, if you have a field of walnut shells with one pea under one of the shells there is a certain probability that you will find a pea if you lift a shell. If that shell contains the pea, do you think the probability of finding a pea if you lift another shell is the same? Apparently so.
This example is not relevant to the issue at hand; in this case you stated that there is one pea under one shell.

So I don’t see how that is analogous.

I can use something you mentioned earlier, thirty quasars per field. In this case we will say the field is ten by ten, that there are thirty peas under the shells, and that each shell may have one pea or none.

So the peas are randomly distributed correct?

You pick up a shell, but before you do, what is the probability that there is a pea under it?
30/100, or 0.30; do we agree?

What is the probability that there is a pea under the next shell that you pick?
It is still the same, despite the logic that some people use. Prior to examination, a shell always has a 3/10 chance of having a pea under it, no matter how many shells you have lifted and how many peas you have found. That prior probability does not change. It is counterintuitive and goes contrary to the Deal or No Deal sort of logic.

So you have lifted a shell and there is a pea under it; what is the chance that the shell one to the left also has a pea under it? 3/10.
And so on. No matter that you chose the shells in a straight row across the field, each shell always had a 3/10 prior chance of having a pea under it.

Now you say, "But I have turned over 29 shells, found a pea under each one, and there is only one pea left." So the chance that any particular one of the remaining 71 shells hides that last pea is 1/71, not 3/10. And in that case it is true, because you have counted the peas and you know how many there were to start with and how many shells are left.

But what if you were about to turn over the 'next' shell and you didn't have that information? There you are, about to turn over shell (5,6), and you say, well, I know that it is 1/71, not three tenths. That is the Deal or No Deal logic, which is post facto sampling. If you had chosen to turn over shell (5,6) before any other shell, what would the chance have been, 1/71 or 3/10?

For a random distribution, the chance that a shell has a pea under it is 3/10; you are using post facto information to update your knowledge of the distribution. The prior probability remains the same.
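A quick sketch of the two calculations being contrasted here, nothing fancy, just the arithmetic:

```python
from fractions import Fraction

SHELLS, PEAS = 100, 30

# Unconditional (prior) probability that any particular shell hides a pea.
prior = Fraction(PEAS, SHELLS)
print("prior for any shell:", prior)                     # 3/10, the same for every shell

# Conditional probability after lifting 29 shells and finding 29 peas:
# one pea remains somewhere among the 71 unexamined shells.
lifted, peas_found = 29, 29
conditional = Fraction(PEAS - peas_found, SHELLS - lifted)
print("conditional for the next shell:", conditional)    # 1/71
# Both numbers are correct; they simply answer different questions.
```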

It is a bugbear and very confusing, because we do always adjust our data in light of new information. And the average distribution won't help, because if the average is thirty, some fields will have 60 peas and some will have zero.


But let us say that there is a really huge field of shells, a million by a million matrix, and we don't actually know how many peas are out there. What we want to know is how many peas there are.

If you want to sample this large field, will you turn over all the shells in a row? That is one strategy; or, if you were listening to old Ms. Battleaxe in class, you remember that you should really spread out your chosen shells, in case you run into an aberrant cluster of peas that throws your sample off.


Now say that there are apples in and amongst the shells and you decide to start looking at the shells near the apples. And you begin to notice that there are some apples that seem to have a lot of peas near them and some that appear to have no peas near them.

1. What conclusions can you draw about the association of peas and apples? Are there special apples that have more peas near them?
2. How would you decide?
3. If you have only turned over ten thousand shells, within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.
4. You have found 300 peas; can you say that the average occurrence of a pea is 300/10000 = 3/100, or 3%?
5. If you find an apple that has 4 peas immediately adjacent to it, can you say that the random probability of that occurring is 0.03 x 0.03 x 0.03 x 0.03, about 8.1 x 10^-7?


I know these questions seem silly and foolish, but they are relevant to sampling theory. And they raise the question: do you have a representative sample of peas under shells?
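For what it is worth, the arithmetic behind questions 4 and 5 looks like this, with the caveat the questions themselves raise: the 3% is only the density in the shells you happened to turn over.

```python
shells_examined = 10_000
peas_found = 300

local_density = peas_found / shells_examined
print(f"density in the examined shells: {local_density:.2%}")   # 3.00%

# If (and only if) 3% were the true density everywhere, and shells were
# independent, four specific adjacent shells all hiding peas would have
# probability 0.03 ** 4.
print(f"0.03^4 = {0.03 ** 4:.2e}")   # about 8.1e-07
# Whether that number means anything depends on whether the 10^4 shells
# sampled near apples are representative of the whole 10^12-shell field.
```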
 
Now say that there are apples in and amongst the shells and you decide to start looking at the shells near the apples. And you begin to notice that there are some apples that seem to have a lot of peas near them and some that appear to have no peas near them.

1. What conclusions can you draw about the association of peas and apples? Are there special apples that have more peas near them?
2. How would you decide?
3. If you have only turned over ten thousand shells, within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.
4. You have found 300 peas; can you say that the average occurrence of a pea is 300/10000 = 3/100, or 3%?
5. If you find an apple that has 4 peas immediately adjacent to it, can you say that the random probability of that occurring is 0.03 x 0.03 x 0.03 x 0.03, about 8.1 x 10^-7?


I know these questions seem silly and foolish, but they are relevant to sampling theory. And they raise the question: do you have a representative sample of peas under shells?

DD, thanks for trying to make these statistical aspects easier for one like me to understand. :)

I like your analogy about the shells, peas and apples.

However, I think that the analogy is flawed in a sense.

3. If you have only turned over ten thousand shells, within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.

You see, if this analogy is directly comparable to the Arp statistics, haven't they always referenced the average density of peas from the SDSS or 2dF catalogs, or similar sources? They aren't looking around the apples to determine the average density, they already have good estimates of that. The surveys have already looked at vast swaths of shells, and seen a number of peas. And these swaths were chosen to provide a good representation of the entire field of shells.

3. If you have only turned over ten thousand shells, within a ten-shell radius of some apples, do you know what the average density of peas is? You have only sampled 10^4 out of 10^12 shells.

We would know the average density of peas around the apples. We can then compare this average to the average of our "swath" survey, and see if the density around the apples is the same as the "swath" density.

4. You have found 300 peas; can you say that the average occurrence of a pea is 300/10000 = 3/100, or 3%?

Around an apple, yes, we could say the average occurrence of a pea is 3%.

It seems as if I have seen other astro papers discussing data reduced with Poisson statistics; I don't remember seeing the use of multiple samples of different types.
 
Wrangler, one thing to remember is that, in any of the surveys (such as 2dF and SDSS), there is an average areal density of 'quasars' (however defined), and also a measure of how that average varies across the sky (a standard deviation, for example), and a measure of how the distribution of averages differs from Gaussian (for example), and so on.

As I mentioned in an earlier post in this thread, one thing which the papers BeAChooser has cited all seem to lack is a recognition that their calculations should include more than just the respective areal averages (much less quantitative work to estimate it, etc).

There are, of course, many other factors and aspects, but they are (all?) beyond the scope of the highly simplified example DD is working through.
 
(just one cited paper)

https://ritdml.rit.edu/dspace/bitstream/1850/1788/1/SBaumArticle11-2004.pdf "The host galaxies of luminous quasars, David J. E. Floyd, Marek J. Kukula, James S. Dunlop, Ross J. McLure, Lance Miller, Will J. Percival, Stefi A. Baum and Christopher P. O’Dea, 2006"
.
Thanks, BAC, for bringing this paper to the attention of this thread's readers.

There has been some discussion of how we can tell that quasars are at (or near) the distances implied by their redshifts.

This paper, and the earlier ones it cites, provides some detailed material on how - look at the host galaxies of quasars, and see how well they match galaxies which don't have quasars in their nuclei.

This paper also covers some models of quasars and their host galaxies, quasar evolution, and so on.

In other words, it's part of a long-standing effort to build an extensively cross-linked and self-supporting understanding of quasars ... a set of consistent models.

At another level, a good example of how astronomy, as a science, works.
 
DD, thanks for trying to make these statistical aspects easier for one like me to understand. :)

I like your analogy about the shells, peas and apples.

However, I think that the analogy is flawed in a sense.
Well, all analogies are; this one is meant more as a way of opening discussion, and of getting the discussion to the point where the validity of a posteriori statistics can be addressed. There is apparently some use of Bayesian models, but I am more familiar with the other approach and will have to do more reading to discuss it with even a little understanding. I am used to population sampling, not belief judgments based upon the data at hand. (It just goes against all my training and biases.)
You see, if this analogy is directly comparable to the Arp statistics, haven't they always referenced the average density of peas from the SDSS or 2dF catalogs, or similar sources?
Yes and no. What I have a problem with is the calculation of the odds of an event occurring using the a posteriori method. In the science I am most familiar with (psychology), population sampling is usually a huge part of any discussion. It is often abused without even getting to something like meta-analysis. So the use of a posteriori reasoning is just foreign to me.
The issue is that a density determination cannot be applied to the probability judgment the way that I have seen some do it here.

To take an average density and then compute the likelihood of an event just seems wrong. You should sample the actual distribution and then compare your effect sample to the representative sample. That way you are basing the representative sample upon more observation.

This is a huge issue in certain fields like archaeology, where it is resisted and embraced in equal measure. Modern theory says that if you have a site you want to dig at, you should randomly sample the site to see what is really there, not just do like Schliemann and dig up the spot you think looks coolest.
They aren't looking around the apples to determine the average density, they already have good estimates of that. The surveys have already looked at vast swaths of shells, and seen a number of peas. And these swaths were chosen to provide a good representation of the entire field of shells.
Yes, but I disagree that this would allow you to say that the average density is thus and such and that the probability of a certain arrangement is 10^-6 because of the average density.

That is what the first post about the dots in the matrix is about.

The ten-dot placement with all the dots in a line is just as likely as any dispersed arrangement.

The apples are more about getting to a discussion of what Arp, Burbidge and others have done.
We would know the average density of peas around the apples. We can then compare this average to the average of our "swath" survey, and see if the density around the apples is the same as the "swath" density.
Yes, that is something you could do, but in my example we only know the average pea density in the sample.


So again, the best procedure would not be to assume that the swaths are representative, but to wait until the known data set is large enough and take samples from it.

Better would be to do the random point sample I talked about. And to also sample around galaxies.

That would mean you would choose the random points in the walnut field and sample the pea arrangements around them and that you would try to get a sample of representation around the apples as well.

Remember that an average density is just that, an average. If your average is 3, you can still randomly get samples with 0 peas and with 6 peas, so you cannot conclude that a certain arrangement is out of line. With only a field area and an average density, you cannot go back and say that a given arrangement is X likely.
(Or, even worse, in some distributions the average could be based upon values varying from 0 to 12,000.)
To do that properly, the best method would be large sets of representative samples.
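To illustrate the spread around an average, here is a small sketch; treating the counts as Poisson is just one common assumption, and the mean of 3 is the number from the example above:

```python
import math

MEAN = 3   # average pea (or QSO) count per field, from the example above

def poisson_pmf(k, lam):
    """Probability of seeing exactly k peas in a field, for a Poisson mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(9):
    print(k, f"{poisson_pmf(k, MEAN):.3f}")
# Fields with 0 peas (~5%) and fields with 6 peas (~5%) both occur routinely,
# so a single field that looks sparse or crowded is not, by itself, evidence
# of anything beyond ordinary random variation.
```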
Around an apple, yes, we could say the average occurrence of a pea is 3%.
In our sample the average density is 0.03. :)
It seems as if I have seen other astro papers discussing data reduced in Poisson statistical fashion, I don't remember seeing the use of multiple samples of different types.

Well that may be, but it would seem that you and DRD have referenced some that discuss sampling bias.


Which I would say is only valid when the events are rare and a larger sample is not available.

Indicative but not definitive.
 
Wrangler, one thing to remember is that, in any of the surveys (such as 2dF and SDSS), there is an average areal density of 'quasars' (however defined), and also a measure of how that average varies across the sky (a standard deviation, for example), and a measure of how the distribution of averages differs from Gaussian (for example), and so on.

As I mentioned in an earlier post in this thread, one thing which the papers BeAChooser has cited all seem to lack is a recognition that their calculations should include more than just the respective areal averages (much less quantitative work to estimate it, etc).

There are, of course, many other factors and aspects, but they are (all?) beyond the scope of the highly simplified example DD is working through.

Yup, no doubt; my example is aimed at one very specific set of methods, the a posteriori model of distribution.

The analogy could be extended to other areas (but would become unwieldy), like QSO definition in sampling.
 
Right David. :D

Well BAC it sure seems to me that you don't, unless you are being deliberately obtuse.

See, here is the deal: I have been trained in sampling theory, practiced sampling theory, and read a lot of publications regarding sampling theory and research articles using it. Not until I looked at some of this material did I realize that there are people who DO use a posteriori statistics in trying to work out probability distributions. In all my reading and research in psychology, mental health, domestic violence and epidemiology, which were of professional interest to me, and then in other areas like biology, genetics, astronomy and history, I had never encountered a posteriori use like the Poisson distribution or Bayes' theorem until just recently. I knew that they existed from my class work, but I didn't know that there were people who used them in research.

So after years (since 1978) of reading and researching various topics in the social sciences, public health and epidemiology, I am shocked to learn that there are people who use a posteriori probability. I am trying to read about it, but it has never been used in my formal class work, in the research I have been involved in at school and in my professional life, or in the extensive reading that I have done as a professional and an interested person.

So I acknowledge my bias, I acknowledge my instructors' and trainers' bias, I acknowledge the bias of the researchers I have worked with, and I acknowledge the bias of all the researchers whose articles I have read; it is very hard for me not to dismiss the a posteriori use of statistics out of hand.

I am reading and trying to wrap my mind around them, but as I said it is hard for me not to just dismiss them out of hand as unreliable and prone to sampling error.
So David, are you ever going to get around to providing me with some numbers (for quasars, galaxies, distribution and the completeness of Arp's survey) so that I can run that calculation over to your liking? :cool:

Well, here is the rub, BAC: I didn't know that there were people who actually used Bayesian statistics and other a posteriori methods. In all my training, education and reading they are just not used and are considered totally suspect. So if you want to use Bayes' theorem, I have to really try to put it in context and cross-interpret it against the research bias that I already have.

In all the research I have been involved in, read, and been trained in, a posteriori statistics are just ruled out of hand and never used.

So it will be a while before I can make a coherent argument and not just do as my training and bias says and dismiss them out of hand.

So I will ask you in return:

In what areas do you feel that the use of a posteriori statistics is a viable way of analyzing data?

To me it is as though someone has suggested crystal gazing as a way of prospecting for oil, I am trying to read about it and not just react from my bias.
 
Dancing David, please do not take anything in this post as a substitute for BeAChooser's views and understanding of the method(s) he used in his post #125 or his understanding of those used in any of the papers by Arp et al. that he has cited.

However, I think it's not clear just how 'clean' the method in BAC's post is, and in particular I think it contains more than just a pure a posteriori analysis.

Further, although you would have no difficulty finding (pure) a posteriori analyses in early Arp et al. papers (and, it must be said, in papers by other authors three decades or so ago too), their later papers are no longer so obviously awful (except, of course, the Bell one!) ... even the Chu et al. one that BAC based his analysis on is more nuanced than how BAC has presented it.

For a fairly straight-forward example of how statistics are used in modern astronomy, you may be interested in Freedman et al. (2001), "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant". To say that it's a landmark paper would be an understatement; it has already been cited >1200 times!

For an example of much more extensive - and rigorous - use of statistics, try Dunkley et al. (2008?) "Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and Parameters from the WMAP data", and some of the papers it references. Fair warning though, some of this is really heavy duty!
 
Dancing David, please do not take anything in this post as a substitute for BeAChooser's views and understanding of the method(s) he used in his post #125 or his understanding of those used in any of the papers by Arp et al. that he has cited.

However, I think it's not clear just how 'clean' the method in BAC's post is, and in particular I think it contains more than just a pure a posteriori analysis.

Further, although you would have no difficulty finding (pure) a posteriori analyses in early Arp et al. papers (and, it must be said, in papers by other authors three decades or so ago too), their later papers are no longer so obviously awful (except, of course, the Bell one!) ... even the Chu et al. one that BAC based his analysis on is more nuanced than how BAC has presented it.

For a fairly straight-forward example of how statistics are used in modern astronomy, you may be interested in Freedman et al. (2001), "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant". To say that it's a landmark paper would be an understatement; it has already been cited >1200 times!

For an example of much more extensive - and rigorous - use of statistics, try Dunkley et al. (2008?) "Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and Parameters from the WMAP data", and some of the papers it references. Fair warning though, some of this is really heavy duty!

Sure, and thanks. Like I said, it just blows my mind that someone would use Bayesian statistics. Now if you only have three events to evaluate, then it makes sense. But I am trying to be polite about it. I have issues with p-value certainty when sample sets are small as well, so it is not as though I just substitute one for the other. My favorite pet peeve is when some political survey mentions a 'margin of error', which usually refers to the sampling statistic of repetition, not actual representation.

But Bayesian statistics just blow my mind. What I have read about them does not seem to indicate they have much use except when the data involve very limited samples.
 
Thanks DRD, that first paper is amazing; almost every paragraph refers to current or past research, and they discuss the rationales for their methodology very carefully. There is some wonderful material about the statistics they use and the built-in problems.

And that was just the first five pages; they seem to address all the confounding factors that they can. My brain needs a rest now.

Here is an example, from page six, of what makes this paper a joy to read. (And I was skimming ahead; it looks rather juicy and tasty in many places.)
http://arxiv.org/abs/astro-ph/0012376

Since each individual secondary method is likely to be affected by its own (independent)
systematic uncertainties, to reach a final overall uncertainty of ±10%, the numbers of
calibrating galaxies for a given method were chosen initially so that the final (statistical)
uncertainty on the zero point for that method would be only 5%. (In practice, however,
some methods end up having higher weight than other methods, owing to their smaller
intrinsic scatter, as well as how far out into the Hubble flow they can be applied – see
§7). In Table 1, each method is listed with its mean dispersion, the numbers of Cepheid
calibrators pre– and post–HST, and the standard error of the mean. (We note that the
fundamental plane for elliptical galaxies cannot be calibrated directly by Cepheids; this
method was not included in our original proposal, and it has the largest uncertainties. As
described in §6.3, it is calibrated by the Cepheid distances to 3 nearby groups and clusters.)
The calibration of Type Ia supernovae was part of the original Key Project proposal, but
time for this aspect of the program was awarded to a team led by Allan Sandage.

So while they acknowledge error, it looks as if they go out of their way to talk about it. Marvelous.
 
DD and DRD make some good statistical sense, IMHO.

How hard would it be for Arpians to take a look at the ~30 bright galaxies with quasars associated, and then look at another ~30 bright galaxies and examine the quasar distribution around them?

Wouldn't that at least go in the right direction, if not being ideal in terms of n samples?

That way, they could begin some tentative steps towards evaluating possible errors in their work via multiple statistical methods, which a number of these other papers demonstrate.
 
DD and DRD make some good statistical sense, IMHO.

How hard would it be for Arpians to take a look at the ~30 bright galaxies with quasars associated, and then look at another ~30 bright galaxies and examine the quasar distribution around them?

Wouldn't that at least go in the right direction, if not being ideal in terms of n samples?

That way, they could begin some tentative steps towards evaluating possible errors in their work via multiple statistical methods, which a number of these other papers demonstrate.
.
Wrangler, as I understand the Arpian idea, it's both very loose and very tight ...

Part of the problem is 'what is a quasar?' - if 'quasars' are AGNs, just like in mainstream astronomy, then it should be both very, very easy to do a test somewhat like what you describe ... and also almost impossible.

If there were a clear, quantitative relationship concerning redshift, ejector, and speed of ejectee (with respect to the ejector), then it would be very simple ... and, no doubt, long since confined to the dustbin of history.

But it's anything but; in fact, if you read the Arpian papers over the past few decades carefully, you may get intensely annoyed ... it's almost as if they are being deliberately vague and obscure, leaving as much wriggle room as possible, and committing to nothing.

For example, it used to be that all quasars were ejected from the likes of Seyfert galaxies, or other 'active' galaxies. Then it became galaxies that may have been, once, active, but are no longer. It was also, once, that the ejectees were on a linear, no-room-for-fudging path: decreasing redshift with distance, increasing maturity (read less like a quasar) with redshift, etc, etc, etc. Now, apparently, it's all up in the air, with nothing concrete.

Too, there was a time when all redshifts had to fit the Karlsson formula (or some variant); now they don't.

Whatever.

Consider this though: if it's variable mass, and if SR rules, then how come we don't detect protons (etc) with differing masses in the flood of cosmic rays? After all, we now know that a not insignificant fraction of CRs come from distances that are much greater than those to the nearest 'quasar ejecting active galaxies' (M82 is the nearest, I think).

Or how come these ejected quasars never seem to create wakes in the inter-galactic medium through which they must travel, especially in rich clusters? After all, such wakes are now well detected, from their radio and x-ray signatures.

And so on.
 
DD and DRD make some good statistical sense, IMHO.

How hard would it be for Arpians to take a look at the ~30 bright galaxies with quasars associated, and then look at another ~30 bright galaxies and examine the quasar distribution around them?

Wouldn't that at least go in the right direction, if not being ideal in terms of n samples?

That way, they could begin some tentative steps towards evaluating possible errors in their work via multiple statistical methods, which a number of these other papers demonstrate.


Regardless of Arp's sample, it is the effect sample and is permitted to be very small. I would want the normative groups to be very large, ten thousand if possible or a thousand at least, and they should include both galaxies and random spots on the sky.

I suggest AGNs because, if Arp is right, they should show a higher-than-average association, like the Arp galaxies. I suggest young galaxies and old galaxies for the same reason: there should be a differential, a difference, if Arp's model is correct.

I also agree with DRD; there should be some other measures that would say that these objects are ejected and acquiring mass, but the mechanism is beyond me, so I don't know what signatures we would see...
 
WARNING! This is a gratuitous bump!

In case BeAChooser is still with us, and reading this thread: you have not been forgotten!

I, DeiRenDopa, am patiently waiting for you to return and answer the (many) open questions about the stuff that you posted here (no doubt there are others also waiting ...).
 
.
But that doesn't rule out the possibility that the two phenomena are connected ... especially if it turns out the high redshift x-ray emitting object near galaxies are improbably quantized and distributed in a pattern matching the theory that Arp, Narlikar, et al have proposed for their origin and behavior.
.

I'm still not at all clear on what you're trying to say here; would you mind clarifying please? Specifically, I am not aware of any "theory that Arp, Narlikar, et al have proposed" which quantitatively accounts for "high redshift x-ray emitting object near galaxies [...] improbably quantized and distributed in a [specific] pattern".

Where is that theory published? Where are the specific, quantitative behaviours explicitly derived from that theory?


I suggest you look up all the papers and books that Narlikar and his associates have published. There seem to be dozens. :)

And I found this recently. It might be of some interest to you:

http://arxiv.org/pdf/physics/0608164 "A Proposed Mechanism for the Intrinsic Redshift and its Preferred Values Purportedly Found in Quasars Based on the Local-Ether Theory Ching-Chuan Su Departmentof Electrical Engineering National Tsinghua University Hsinchu, Taiwan, August 2006, Abstract – Quasars of high redshift may be ejected from a nearby active galaxy of low redshift. This physical association then leads to the suggestion that the redshifts of quasars are not really an indication of their distances. In this investigation, it is argued that the high redshift can be due to the gravitational redshift as an intrinsic redshift. Based on the proposed local-ether theory, this intrinsic redshift is determined solely by the gravitational potential associated specifically with the celestial object in which the emitting sources are placed. During the process with which quasars evolve into ordinary galaxies, the fragmentation of quasars and the formation of stars occur and hence the masses of quasars decrease. Thus their gravitational potentials and hence redshifts become smaller and smaller. This is in accord with the aging of redshift during the evolution process. In some observations, the redshifts of quasars have been found to follow the Karlsson formula to exhibit a series of preferred peaks in their distributions. Based on the quasar fragmentation and the local-ether theory, a new formula is presented to interpret the preferred peaks quantitatively." :)

BeAChooser, in one way I should thank you; what you wrote is good resource material for my study on why threads like these are often so long.

Oh ... you doing a study on that? Out of curiosity, who is funding your study? Or is that just a personal interest of yours? Is there something about long threads that you don't like? Do they irritate you? Will you be publishing this study in some journal? :)

The last is, of course, just what a troll does, and I note that many JREF forum regulars have called you just that. If that's so, then it's an obvious conclusion - one reason why threads like this are so long is that people keep feeding the trolls.

So is that how you are going to escape explaining the improbabilities surrounding NGC 3516, NGC 5985 and NGC 3628? Label me a "troll" and walk away (so you don't feed the troll)? Maybe I'll do a study on people who use that ad hominem as a means of avoiding the issues. :)
 
.
1. http://heasarc.gsfc.nasa.gov/W3Browse/all/veroncat.html
.

1. Part of the formal training that astronomers receive (or should receive) includes what catalogues are and how they can (and cannot) be used. If you actually use data from a catalogue, not only must you cite it, but any reviewer of your preprint is expected to have familiarity with the catalogue, and should point out when you are using it for a purpose for which it is not suitable.

And how exactly does that address NASA's use of the catalog on their website? I see no mention of the warning that it is not to be used for statistical analysis. Was NASA derelict in not reprinting it?

.
.
2. http://arxiv.org/pdf/astro-ph/0611820.pdf "Photometric Selection of QSO Candidates From GALEX Sources, David W. Atlee
and Andrew Gould, 2007"
.

2. Seems to be a perfectly OK use of VCVcat; how did you read this to be otherwise?

Well perhaps that depends on how you define "statistical purposes". Let me quote some excerpts that to me sound like statistical analysis using Veron-Cetty data:

"In addition to our color criteria, we find that the number of GALEX-USNO matches drops rapidly for R ? 19.6, and we therefore limit our catalog to R ? 19.5. Finally, by comparing the Galactic-latitude distribution of our QSO candidates to QSOs found in the Veron catalog (Veron-Cetty 2006), we find that we have very little sensitivity for |b| < 25 ? and therefore do not search for quasars below this limit. We believe this low sensitivity is due to heavy extinction in the FUV -band."

"We match our candidates to the Veron QSOs and 2MASS point sources via the VizieR search engine, requiring that the distance from the matched source to the candidate be less than 5??."

"From the 2692 sources in XMMSL1 (Freyberg et al. 2006), we find 20 that match our QSO candidates. The total probability of these matches predict that ten should be genuine QSOs; ten of these candidates appear in Veron-Cetty (2006), with little information for most of the others. One of candidates identified as a QSO in Veron-Cetty (2006) (USNO 1275-07898737) is unusual; it has an SDSS spectrum but cannot be classified by the automated pipeline."

.
3. http://www.journals.uchicago.edu/doi/abs/10.1086/379006 "On the Cross-Correlation between the Arrival Direction of Ultra–High-Energy Cosmic Rays, BL Lacertae Objects, and EGRET Detections: A New Way to Identify EGRET Sources?, Diego F. Torres, Stephen Reucroft, Olaf Reimer, Luis A. Anchordoqui, 2005"

5. http://www.auger.org/technical_info/...706.1715v1.pdf "Search for correlation of UHECRs and BL Lacs in Pierre Auger Observatory data, Diego Harari, 2007"
.

3. and 5. I haven't read the papers themselves, but the use of VCVcat seems OK for the purposes stated; how did you conclude it was not?

So you haven't read the papers but yet you can state that the use is ok for the purposes stated? :rolleyes: Again, I shall just quote some excerpts that certainly seem to indicate VCVcat data being used for statistical analysis.

From source #3:

"In Figure 1, we plot the position on the sky in galactic coordinates of both the UHECRs and the selected BL Lac objects. There are no positional coincidences between these two samples up to an angular bin greater than 5L Lac objects and any UHECR data set with 33 entries to be Poisson with a mean value of ?4.06. Taking the data at face value, this implies a 2 j deviation effect." From the caption of Figure 1 - "The stars stand for the 22 BL Lac objects from the 9th edition of the Veron-Cetty and Veron (2000) Catalogue of Quasars and Active Galactic Nuclei, with redshifts z > 0.1 or unknown, magnitudes m < 18 and radio flux at 6 GHz (F6 > 0.17 Jy)."

"In a series of recent papers, Tinyakov & Tkachev (2001, 2002, 2003) claim a correlation between the arrival directions of UHECRs and BL Lac objects, a subgroup of the QSO sample previously considered. Specifically, the BL Lac objects chosen were those identified in the (9th edition) Veron-Cetty & Veron (2000) Catalogue of Quasars and Active Galactic Nuclei, with redshifts z > 0.1 or unknown, magnitudes m < 18 and radio flux at 6 GHz (F6 > 0.17 Jy). Only 22 objects fulfill such restrictions. In this analysis, there is no buffer against contamination by mismeasured protons piled up at the GZK energy limit. The cosmic-ray sample of Tinyakov & Tkachev consists of 26 events measured by the Yakutsk experiment with energy greater than 10^^19.38 eV (Afanasiev et al. 1996) and 39 events measured by the AGASA experiment with energy greater than 10^^19.68 eV (Hayashida et al. 2000). The evidence supporting their claim is based on six events reported by the AGASA collaboration (all with average energy <10^^19.9 eV) and two events recorded with the Yakutsk experiment (both with average energy <10^^19.6 eV), which were found to be within 2

"On a similar track, Gorbunov et al. (2002) claimed that a set of gamma-ray–loud BL Lac objects can be selected by intersecting the EGRET and BL Lac object catalogs. The only requirement that Gorbunov et al. considered for a BL Lac object to be physically associated with an EGRET source is that the angular distance between the best estimated position of the pair does not exceed 2R95, where R95 is the 95% CL contour of the EGRET detection. Their claim was based on a positional correlation analysis (using the doubled size for EGRET sources) between the third EGRET catalog (3EG; Hartman et al. 1999) and the objects identified as BL Lac in the Veron-Cetty & Veron (2000) catalogue."

From source #5:

"We test previously reported correlations between UHECRs and subsets of BL Lacs. Note that we test the physical hypothesis of correlation with a particular class of objects at a given angular scale and above a given energy threshold, but the collections of candidate sources are not identical to those in the original reports, because the sky observed by the southern Pierre Auger Observatory is different, and has only a partial overlap. ... snip ...
Test A: 22 BL Lacs from the 9th edition of the catalog of quasars and active nuclei [10], with optical magnitude m < 18, redshift z > 0.1 or unknown, and 6 cm radio flux F6 > 0.17 Jy. ... snip ... Test B: 157 BL Lacs (76 in the f.o.v.) from the 10th edition of [10] with m < 18. ... snip ... Test D: 204 confirmed BL Lacs (106 in the f.o.v.) from the 10th edition of [10] with m < 18. Subclasses: a) 157 BL, b) 47 HP. ... snip ... The determination of the statistical significance with which our measurements exclude the hypothesis that the signal present in the HiRes data set (case D) is due to correlations with BL Lacs is a delicate issue. ... snip ... Reference [10] M.-P. Veron-Cetty and P. Veron. A catalogue of quasars and active nuclei. 9th edition: ESO Scientific Report No. 19, 2001; 10th edition: Astron. & Astrophys. 374:92, 2001; 12th edition: Astron. & Astrophys. 455:773, 2006."

That last is very clearly a statistical analysis based on Veron-Cetty data. How would you describe it if it's not?

.
4. http://aps.arxiv.org/pdf/astro-ph/0703280 "Quantum Vacuum and a Matter - Antimatter Cosmology, Frederick Rothwarf and Sisir Roy, 2006"

4. This is a v3 preprint; I expect a reviewer doing her job right would suggest tightening the language a bit; however, the use of VCVcat isn't as crazily wrong as it was in Bell's paper.

The warning regarding VCVcat 12th Edition was not that it's "crazily wrong" but potentially incomplete. And very clearly, the earlier editions of the catalog were far more incomplete than the one Bell used. So perhaps we should throw out as garbage ANY work that relied on any VCVcat edition? Right?

.
6. http://www3.interscience.wiley.com/c...TRY=1&SRETRY=0 "Automated spectral and timing analysis of AGNs, F. Munz, V. Karas, M. Guainazzi, 2006"
.

6. Link didn't work

Curious, it works for me. Oh well.

.
7. http://www.saber.ula.ve/db/ssaber/Ed...pers/isamp.pdf "Dynamic multiple scattering, frequency shift and possible effects on quasars astronomy, Sisir Roy, Malabika Roy, Joydip Ghosh, Menas Kafatos, 2007"
.

7. If the abstract is a fair guide to what's in the paper, then it looks like garbage (i.e. misuse of VCVcat for a purpose it is explicitly unsuitable for)

One down ...

.
8. http://209.85.173.104/search?q=cache...nk&cd=27&gl=us "A Bar Fuels a Super-Massive Black Hole?: Host Galaxies of Narrow-Line Seyfert 1 Galaxies, Kouji Ohta, Kentaro Aoki, Toshihiro Kawaguchi and Gaku Kiuchi, 2006"
.

8. Seems to be a perfectly OK use of VCVcat; how did you read this to be otherwise?

Excerpts:

"We present optical images of nearby 50 narrow-line Seyfert 1 galaxies (NLS1s) which cover all the NLS1s at ... snip ... known at the time of 2001. ... snip ... With these images, we made morphological classification by eye inspection and by quantitative method, and found a high bar frequency of the NLS1s in the optical band ... snip ... The sample size is 2.6 times larger than that of NLS1s (19) studied by Crenshaw et al. (2003). Most of the present sample were taken from “a catalogue of quasars and active galactic nuclei, 10th edition” (Veron-Cetty & Veron 2001) ... snip ... For each object, we calculate an optical continuum luminosity ... snip ... , a black hole mass ... snip ... , and an Eddington ratio, and the values are listed in Table 1. The optical continuum luminosity is derived from the B magnitude given by Veron-Cetty & Veron 2003)"

That looks to me like they were doing some form of statistical analysis based on Cetty-Veron data too ... but maybe you, being a trained astrophysicist will tell us otherwise. :)

.
9. https://ritdml.rit.edu/dspace/bitstr...cle11-2004.pdf "The host galaxies of luminous quasars, David J. E. Floyd, Marek J. Kukula, James S. Dunlop, Ross J. McLure, Lance Miller, Will J. Percival, Stefi A. Baum and Christopher P. O’Dea, 2006"
.

9. Seems to be a perfectly OK use of VCVcat; how did you read this to be otherwise?

Excerpts:

"THE QUASAR SAMPLE The sample was selected from the quasar catalogue of Veron-Cetty & Veron (1993) and comprises two subsamples, both confined to the redshift range 0.29 < z < 0.43 ... snip ... These two samples allow us to explore an orthogonal direction in the optical luminosity - redshift plane, in contrast to our previous HST studies of quasar hosts (McLure et al. 1999; Kukula et al. 2001; Dunlop et al. 2003) which concentrated on quasars of comparably moderate luminosity (MV > ?25), but spanning a wide range in redshift out to z ? 2 (Fig.1)."

"Quasars in the current study. J2000 co-ordinates were obtained from the Digitised Sky Survey plates maintained by the Space Telescope Science Institute. Redshifts and apparent V magnitudes are from the quasar catalogue of Veron-Cetty & Veron (2000)"

Certainly looks like they did some form of statistical analysis. And yes, those refer to an earlier edition of VCVcat. But my comment about that stands. Your *supposed* concern was that the latest VCVcat is incomplete. Well that earlier catalog must have been far more incomplete. So shouldn't we simply disregard that work ... and for that matter, any work that depends on any version of the VCVcat? For consistency? And wouldn't writing a paper to that effect and publishing it in a mainstream journal be a much more valuable use of your time than studying whether I'm a troll and making threads too long for your liking? :)
 
Quote:
Of course, spiral arms are not optically thick, not even in the x-ray band, as W. Keel has shown in a series of papers, and as this Chandra PR attests (work based on discovery of a hole by Lockman et al.).
.
Care to show us ANY evidence of ANY object behind the galaxy being seen through that region of the galaxy? Or are you just waving your hands?
.

Would you mind explaining why this is relevant?

You really can't see the relevancy, DRD? Hmmmm.

Surely the relevant question to ask is how transparent spiral arms of galaxies are, in the x-ray and visual wavebands?

Except this object really isn't very far out on a spiral arm. Just take another look at the picture:

[attached image: spiralgalaxy.new.gif]


What leads you to believe that region is "transparent"? After all, I linked peer-reviewed scientific papers by astronomers who conclude it is not. Astronomers who concluded the quasar was almost certainly on this side of NGC 7319, and part of their reasoning was the likely density of obscuring matter. For example, http://www.journals.uchicago.edu/doi/full/10.1086/426886 states "there are no signs of background objects showing through the disk in our HST picture of the inner regions of NGC 7319". Why would this quasar just coincidentally be the only one, DRD?
 
Which five quasars, BAC? The NGC 3516 paper discusses six.

Wrong. The paper I cited on NGC 3516 (http://www.journals.uchicago.edu/doi/abs/10.1086/305779 ) talks about 5 quasars along the minor axis. The object at z = 0.089 is identified this way: "there is a very strong X-ray source that is listed as having a Seyfert spectrum (Veron-Cetty & Veron 1996) with redshift z = 0.089 (about 10 times the redshift of NGC 3516). Optically it is a compact, semistellar object. With its strong X-ray and radio properties, it is closely allied to BL Lac objects and therefore to the transition between quasars and objects with increasing components of stellar populations."

Except that two (of six) quasars do not match the predicted z within 0.1

True, but I was trying to simplify the problem by balancing the fact that 3 (of the five) fall into intervals less than 0.10 in width. But since you apparently aren't satisfied with my simplification, let's take another look at the whole problem.

Let's start by matching each observation with its corresponding Karlsson value: (0.33, 0.30) (0.69, 0.60) (0.93, 0.96) (1.40, 1.41) (2.10, 1.96). That means the distance from the Karlsson value in each case is 0.03, 0.09, 0.03, 0.01, 0.14, respectively. To compare with the 0.10 discretization I used in my calculation, we must double those values: 0.06, 0.18, 0.06, 0.02, 0.28. Note that 3 (of five) fall within a 0.06 discretization, but you are correct that 2 don't fall within a 0.10 interval.

Let's redo the calculation for just those three cases and see what we get, assuming again that the quasars randomly came from a population with an equal distribution of probability between 0 and 3.0. There are 50 possible values in that range given an increment of 0.06. Now looking at this again, I don't think I should have used the permutation formula in the previous calculation. This time let's just use the combination formula ... in other words, let's find the number of possible combinations of those r values from n possible values.

The formula for that is n!/((n-r)!r!). Thus the probability of seeing those 3 values turn up is 1/(n!/((n-r)!r!)). In this case, that works out to 1/(50*49*48/(3*2)) = 5.1 x 10^-5.

And now let's factor in the unlikelihood that we'd find 2 more quasars near that galaxy that are unusually close to the Karlsson (K) values of 0.60 and 1.96. Surely a conservative estimate for that probability would be to simply find the chance of each specific number turning up, given an increment appropriate for that case. For the 0.69 case, for example, where the increment is 0.18 (twice the 0.09 value), over the range 0 to 3.0 there are at least 16 increments. So the probability of finding that number is 1/16 ≈ 0.06. For the 1.96 case, the increment needs to be 0.28 and there are 10 possible values; the probability is 1/10 = 0.10. And finding these z's should be relatively independent of finding the others, so the probabilities should simply multiply together to give a final combined probability.

Therefore, I assert that, to first order, the probability from the 3-number sequence can be adjusted to account for the unlikelihood of the 2 other quasars by multiplying it by 0.06 * 0.10. And that results in a final combined probability of 5.1 x 10^-5 * 0.06 * 0.10 = 3.06 x 10^-7. So to first order, it appears my initial assumption (that an increment of 0.1 used for all of them would balance everything out) was off by a factor of about 10.
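Continuing the sketch, the two wider-increment quasars can be folded in exactly as described (the 1/16 and 1/10 are just my assumed bin counts from the paragraph above, nothing more):

[code]
# Fold in the two wider-increment quasars, per the assumptions stated above.
p_three = 1 / 19600          # 3-value combination probability from above
p_z069  = 1 / 16             # z = 0.69 case: increment 0.18 over z = 0 to 3.0
p_z210  = 1 / 10             # z = 2.10 case: increment 0.28, ~10 bins
p_ngc3516_redshifts = p_three * p_z069 * p_z210
print(p_ngc3516_redshifts)   # ~3.2e-7 (the text's 3.06e-7 comes from rounding 1/16 down to 0.06)
[/code]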

And lest you think the selection of z = 3.0 as the upper bound in my calculation is arbitrary, let me note that I found a mainstream source that said, based on the SDSS study, the number of quasars decreases by a factor of 40 to 50 from z = 2.5 to z = 6.0. Therefore, I think I am justified in using a range of z = 0 - 3.0 in my calculations for quasar z. I will agree that the density of quasars of different z is not uniform over the range. Several of the sources I found indicated that it climbs rather steeply from a value at z = 0 to z = 1.5 and then levels off through z = 3.0. I don't see an easy way to incorporate this fact into the calculation but I don't think it really makes much of a difference since the differences between the Karlsson values and the observed z don't appear to have much of a trend up or down over the range.

However, since you questioned my 0.10 simplification, I'm going to take another look at the rest of the calculation, starting with what I estimated was the total number of quasars in the sky. Recall that I estimated the total number of quasars that can be seen as 1,237,500 ... by multiplying the number of square degrees in the sky (41,250) by 30 quasars per square degree. But is 30 quasars per square degree really a reasonable value to use?

Here's a 2005 study http://www.iop.org/EJ/abstract/1538-3881/129/5/204 that indicates an average density of 8.25 deg^-2 based on the SDSS survey, then argues it should be corrected upward to 10.2 deg^-2 to make it complete. And if you go to the SDSS website (http://www.sdss.jhu.edu/ ) you find they say the effort will observe 100,000 quasars over a 10,000 deg^2 area. That also works out to about 10 quasars deg^-2. So it looks like I used a number that was 3 times too large in my earlier calculation. In this revised calculation, I will assume the average quasar density is only 10 deg^-2. That means the total number of quasars that can be seen from earth is around 410,000.

So now we come to the question of how those 410,000 quasars are distributed, or more precisely, how many galaxies with 5 or more quasars near them can be expected in the total population of galaxies that can be seen. Recall that in my previous calculation I initially assumed that all the quasars are located near galaxies and distributed 5 per galaxy until the number of quasars available is exhausted. That resulted in an estimate of 250,000 (~1,237,500 / 5) galaxies with 5 quasars each. Doing that maximized the total number of galaxies assumed to have 5 quasars, which was a conservative approach from the standpoint of not wanting to overestimate the improbability of the observation of NGC 3516.

But the truth is that most quasars do not lie close to galaxies at all (certainly not galaxies where we can discern any detail, as is the case in all three examples of interest here), so that's why I later multiplied the calculated probability by 0.10 to account for the assumption that only 10% of quasars lie next to a galaxy. I still think that's probably a reasonable number. But for this calculation, I'm going to give your side the benefit of the doubt and assume that fully half of all quasars are near galaxies. That has to be very conservative. Wouldn't you agree? So now there are 205,000 quasars in the population that we need to distribute amongst galaxies.

It's also apparent that most galaxies that have nearby quasars only have a few quasars ... not 5 or more. I didn't find any source actually quantifying this, but we can observe that in Arp's catalog of anomalous quasar/galaxy associations, relatively few of the examples have 5 or more quasars in the field. Therefore, I think it's conservative to assume that only half the quasars are in groups of 5 or more near galaxies. You would agree, right? In fact, I think this is a very conservative assumption; otherwise Arp's list of anomalous quasar/galaxy associations would likely have contained far more examples with large numbers of quasars. In any case, I'm going to reduce the number of quasars available to comprise the population of galaxies that have 5 quasars by half ... to 103,000. Now if you divide that number by 5, that means there are at most 20,600 galaxies visible that have 5 quasars in their vicinity.

Therefore, where previously I effectively multiplied the probability of any given galaxy having 5 quasars with Karlsson redshifts by 25,000 (1,237,500 / 5 * 0.10), now I'm going to multiply the new probability calculated for NGC 3516 by only 20,600. Doing so produces a probability (for finding the 5 quasars with the specific z's near NGC 3516 amongst the total population of quasar/galaxy associations) of 3.06 x 10^-7 * 20,600 = 0.0063.
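Just to keep the bookkeeping transparent, here's the same population estimate as a short sketch (the one-half fractions and the 10 deg^-2 density are my assumptions, exactly as stated above):

[code]
# Sketch of the population bookkeeping described above (assumed values).
sky_sq_deg       = 41_250
quasars_per_deg2 = 10                      # assumed SDSS-based density
total_quasars    = sky_sq_deg * quasars_per_deg2          # ~412,500

frac_near_galaxies  = 0.5                  # assumed: half of quasars lie near galaxies
frac_in_groups_of_5 = 0.5                  # assumed: half of those sit in groups of 5+
galaxies_with_5 = total_quasars * frac_near_galaxies * frac_in_groups_of_5 / 5
# ~20,600 galaxies with 5 nearby quasars

p_redshifts_per_galaxy = 3.06e-7           # NGC 3516 redshift probability from above
p_anywhere = p_redshifts_per_galaxy * galaxies_with_5     # ~0.0063
print(int(galaxies_with_5), p_anywhere)
[/code]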

Now let's complete the calculation by again adding in the fact that all 5 objects are aligned rather narrowly along the minor axis. I'll just use the dart-board example I used previously, where I found the probability of throwing 5 darts in a row that land within a 15-degree zone extending from opposite sides of the center of the dart board to be 3.9 x 10^-6 per galaxy. And again, we have to multiply by the number of galaxies with 5 quasars that can be aligned. With only 20,600 such cases possible (conservatively), the probability of finding 5 quasars aligned along the minor axis is therefore 3.9 x 10^-6 * 20,600 = 0.08, which makes the total likelihood of encountering this one case, if one carefully studied the entire quasar population, equal to 0.08 * 0.0063 = ~0.0005.
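And the last step of the sketch, taking the dart-board figure from my earlier post as a given:

[code]
# Fold in the minor-axis alignment factor (dart-board estimate from my earlier post).
p_align_per_galaxy = 3.9e-6
galaxies_with_5    = 20_600
p_alignment = p_align_per_galaxy * galaxies_with_5    # ~0.08
p_total     = p_alignment * 0.0063                    # ~0.0005
print(p_alignment, p_total)
[/code]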

That's a very small probability. Yet Arp found such a case after looking, not at all the galaxies that have quasars near them, but by looking at only a tiny fraction of those galaxies. Which makes his observation even more improbable. Perhaps significantly so.

And he not only found that case, he found two others that have large numbers of quasars with values close to the Karlsson values aligned with the minor axis of galaxies. Recall that NGC 5985 had 5 that are lined up along its minor axis with redshifts of 2.13, 1.97, 0.59, 0.81 and 0.35. The corresponding deltas to the nearest Karlsson values are 0.03, 0.01, 0.01, 0.15, 0.05. Let's ignore the 0.81 and 0.35 values for the moment and find the probability of encountering the first three values, on a combinatorial basis. With an increment equal to twice the largest delta (i.e., 0.06), that probability is 1/((50*49*48)/(3*2*1)) = 1/19,600 = 5.1 x 10^-5.

As in the other case, we still have to add in the effect of the other two data points. Following the same approach as before, the probability of seeing the 0.35 value with an increment of 0.1 is 1/30 = 0.033. The probability of seeing the 0.81 data point with an increment of 0.30 is 1/10 = 0.1.

Therefore, the combined probability for NGC 5985 is 5.1 x 10^-5 * 0.033 * 0.1 = 1.683 x 10^-7. Accounting for the actual number of quasars that might be seen near galaxies in groups of 5, and the fact that all these objects are aligned with the minor axis, gives a final probability of 1.683 x 10^-7 * 20,600 * 0.08 = ~0.0003.
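The NGC 5985 numbers can be checked with the same sort of sketch (again, the increments are the ones I assumed above):

[code]
# Same sketch applied to the NGC 5985 values used above.
import math

p_three = 1 / math.comb(50, 3)    # three values within the 0.06 increment, ~5.1e-5
p_z035  = 1 / 30                  # z = 0.35 case: increment 0.10 over z = 0 to 3.0
p_z081  = 1 / 10                  # z = 0.81 case: increment 0.30
p_ngc5985 = p_three * p_z035 * p_z081                 # ~1.7e-7

p_final = p_ngc5985 * 20_600 * 0.08                   # ~0.0003
print(p_ngc5985, p_final)
[/code]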

That's another very small probability. And finding two such improbable associations when Arp didn't look at all that many galaxies/quasars in order to find these cases makes this even more improbable.

Any way you look at it, DRD, this finding does not bode well for the theory that quasar redshifts are not quantized and have nothing to do with the galaxies that they are near. And I draw your attention to the use of Bayes' Theorem that I outlined in my earlier post to David (post #151).

I can update that case for the new probabilities calculated above as follows.

Suppose a priori we are really sure that the mainstream theory about quasars and redshift is correct. Let's say Pr0(A) = 0.999, leaving just a little room for doubt. That means Pr0(B) = 0.001. Fair enough?

Next, we "measure" that sequence of 5 redshift values from NGC 3516 that are all aligned with the minor axis of the galaxy. And based on the calculation I did above, the probability of that sequence of values and alignment occurring under the assumption that hypothesis A is correct (PA(xi)) is calculated to be no better than 0.0005. At the same time, we can say that PB(xi) = 0.9995.

Now let's compute Pr1(A) and Pr1(B).

Pr1(A) = (0.999 * 0.0005) / (0.999 * 0.0005 + 0.001 * 0.9995) = 0.33

Pr1(B) = 0.67

In other words, based on that single observation, the probability that your hypothesis is correct has dropped from 99.9% to 33% and the probability that the quasars' redshifts and positions aren't just a matter of random chance has risen from 0.1% to 67%.
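For anyone who wants to reproduce those numbers, here's a minimal sketch of the Bayes update (note that setting PB(xi) = 1 - PA(xi) is my own simplification for this argument, not a general rule):

[code]
# Minimal Bayes update using the numbers quoted above.
prior_A = 0.999                 # prior for the mainstream hypothesis A
prior_B = 1 - prior_A           # prior for the alternative B
like_A  = 0.0005                # P(observation | A), from the calculation above
like_B  = 1 - like_A            # my simplifying assumption for P(observation | B)

evidence = prior_A * like_A + prior_B * like_B
post_A = prior_A * like_A / evidence   # ~0.33
post_B = prior_B * like_B / evidence   # ~0.67
print(round(post_A, 2), round(post_B, 2))
[/code]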

That theorem shows that finding these cases significantly reduces the probability that the mainstream hypothesis about quasars is correct. At least enough that it would behoove the mainstream to take a closer look rather than just try to dismiss this out of hand as they have done, and you and David are now trying to do. :D

and the range you chose is both arbitrary and too large (the highest peak you can consider is 2.1 ... otherwise you have to consider that two other predicted peaks in the range [0,3] were not observed)

That's an interesting comment but I don't think it's correct because the theory has it that z is a function of the age of the quasar. There is no requirement in any given case that the galaxy have been producing quasars the entire time, including up till recently. There is therefore no requirement that there be quasars corresponding to each Karlsson value that is possible. Values can be skipped because for some reason the galaxy stopped producing quasars for a time. Or there may be more than one value at a given z because the galaxy was producing more for a time. Or there may not be any high ones because the galaxy stopped producing them at some point. Or perhaps we don't see any higher z quasars because they tend to be much closer to the site where they were produced and are therefore lost in the glare or opacity of the parent galaxy.

I have no idea how you got from 'there are a possible 1,237,500 quasars' to 'there could be at most 250,000 groups of 5 located next to 250,000 different galaxies'; would you mind explaining please?

I think the logic was clear enough in what I wrote earlier but see the revised calculation above. I tried to make it even clearer.

Based on Arpian ideas, what do you predict the redshifts of those 'newly discovered' quasars (and galaxies) will be BeAChooser? Where will they be, in relation to NGC 3516?

Obviously, I would predict they'd tend to be near Karlsson values. Do you have some data to suggest they are not? If not, I don't think this concern has much merit at all and just lengthens the thread further. ;)

And what would you predict for the next ten years, and the ten years after that, and ...?

There is apparently a limit to the total number of quasars. The methodology used in SDSS was designed to produce a relatively complete list of the surveyed region and one of the papers I found concluded that it had succeeded ... with well above 90% completeness in that region. So I don't anticipate new observations that will increase the estimated total quasar count much higher than it already is ... provided the regions that were already surveyed are representative of the whole. Sure, individual quasars will be found in those regions of the sky that weren't previously surveyed but that shouldn't increase total quasar counts. :)

And for the record, I have no idea whether NGC 3516 lies in an already surveyed region or not. Do you? If so, then by all means tell us the latest data. No need to be coy. That can only serve to make the thread longer and we know how you dislike that. :)
 
Originally Posted by BeAChooser
Look at it this way, if you have a field of walnut shells with one pea under one of the shells there is a certain probability that you will find a pea if you lift a shell. If that shell contains the pea, do you think the probability of finding a pea if you lift another shell is the same? Apparently so.

This example is not relevant to the issue at hand; in this case it was stated by you that there is one pea under one shell.

So I don’t see how that is analogous.

And therein lies part of your problem, David. But I lack the interest to correct your misunderstanding of the problem. :)

But let us say that there is a really huge field of shells, like a million by a million matrix. And we don't actually know how many peas are out there.

But in this case, David, we do know the number of peas ... or at least have a good estimate. And the fact that you still don't see that is another reason you've failed to understand the nature of this problem. :)

I have been trained in sampling theory, practiced sampling theory and read a lot of publications regarding sampling theory and research articles using sampling theory.

Don't tell us you work for a pollster? :jaw-dropp

But Bayesian statistics just blow my mind. What I have read about them does not seem to indicate they have much use except when the data involves very limited samples.

What do you think we have here, David, but a case with very limited samples of specific types of data. :) Just so you know, Bayesian methods are used all the time with rare events.

In all the research I have been involved in and read and been trained in, a posteriori statistics are just ruled out of hand and never used.

Well David, I guess that just proves you're not an engineer. :p

In what areas do you feel that the use of a posteriori statistics is a viable way of analyzing data?

Well obviously, I think the case at hand. ;)
 
WARNING! This is a gratuitous bump!

In case BeAChooser is still with us, and reading this thread: you have not been forgotten!

I, DeiRenDopa, am patiently waiting for you to return and answer the (many) open questions about the stuff that you posted here (no doubt there are others also waiting ...).

Don't worry, DeiRenDopa, I didn't forget you. But some of us have other interests beside studying long threads and trolls. :)
 
Help a noob out.............
.
"An Internet troll, or simply troll in Internet slang, is someone who posts controversial and usually irrelevant or off-topic messages in an online community, such as an online discussion forum, with the intention of baiting other users into an emotional response or to generally disrupt normal on-topic discussion." (source).

As BAC has returned, and apparently done a lot of research to address questions posed and points raised, I think it's safe to say that he's not a troll ... in this thread.

The fact that he's not (apparently, so far) bothered to do the same in at least two other JREF forum (Science etc section) threads, lends weight to the hypothesis that his posting behaviour in this thread is an aberration ...
 
DeiRenDopa said:
(some parts omitted)
I'm still not at all clear on what you're trying to say here; would you mind clarifying please? Specifically, I am not aware of any "theory that Arp, Narlikar, et al have proposed" which quantitatively accounts for "high redshift x-ray emitting object near galaxies [...] improbably quantized and distributed in a [specific] pattern".

Where is that theory published? Where are the specific, quantitative behaviours explicitly derived from that theory?
I suggest you look up all the papers and books that Narlikar and his associates have published. There seem to be dozens. :)
.
Thanks, and welcome back.

To be honest, I've read a great many of them, these past years, and I don't recall any which address the specific question I asked you.

So if you have a specific paper, or papers, that addresses my question - which is, after all, based solely on (rather outrageous) claims that you (not Arp) made - I think it both safe and prudent to provisionally conclude that there is no such theory.
.
And I found this recently. It might be of some interest to you:

http://arxiv.org/pdf/physics/0608164 "A Proposed Mechanism for the Intrinsic Redshift and its Preferred Values Purportedly Found in Quasars Based on the Local-Ether Theory" - Ching-Chuan Su, Department of Electrical Engineering, National Tsinghua University, Hsinchu, Taiwan, August 2006. Abstract – "Quasars of high redshift may be ejected from a nearby active galaxy of low redshift. This physical association then leads to the suggestion that the redshifts of quasars are not really an indication of their distances. In this investigation, it is argued that the high redshift can be due to the gravitational redshift as an intrinsic redshift. Based on the proposed local-ether theory, this intrinsic redshift is determined solely by the gravitational potential associated specifically with the celestial object in which the emitting sources are placed. During the process with which quasars evolve into ordinary galaxies, the fragmentation of quasars and the formation of stars occur and hence the masses of quasars decrease. Thus their gravitational potentials and hence redshifts become smaller and smaller. This is in accord with the aging of redshift during the evolution process. In some observations, the redshifts of quasars have been found to follow the Karlsson formula to exhibit a series of preferred peaks in their distributions. Based on the quasar fragmentation and the local-ether theory, a new formula is presented to interpret the preferred peaks quantitatively." :)
.
Thanks; that should be a fun read.
.
Oh ... you doing a study on that? Out of curiosity, who is funding your study? Or is that just a personal interest of yours? Is there something about long threads that you don't like? Do they irritate you? Will you be publishing this study in some journal? :)
.
I'm pleased that you are interested! :)

It is entirely personal, I doubt that I'll be publishing it in any journal.

What fascinates me, in this regard, is why some threads, in the explicitly 'Science' sections of discussion forums such as this JREF forum, on astronomy (etc), are so long. I mean, naively I'd expect that once the context, scope, etc were cleared up (say, one page, max), and once the relevant observations and theory/theories were agreed (this means, of course, the key papers; again, maybe no more than two pages), then the discussion should take no more than a page or three, tops, to arrive at an agreement on where the key areas of disagreement are, and what the outline of a research project to test them would look like.

Clearly, this does not happen (sometimes)! :D

Why?
.
The last is, of course, just what a troll does, and I note that many JREF forum regulars have called you just that. If that's so, then it's an obvious conclusion - one reason why threads like this are so long is that people keep feeding the trolls.
So is that how you are going to escape explaining the improbabilities surrounding NGC 3516, NGC 5985 and NGC 3628? Label me a "troll" and walk away (so you don't feed the troll)? Maybe I'll do a study on people who use that ad hominem as a means of avoiding the issues. :)
.

Nice one, BAC! :D

You've reminded me that I need to find what the standard terms are for various kinds of logical fallacies (such as the one that this part of your post comes very close to, it seems) ... did you not read the qualifier (that I bolded)? Did you read it and (deliberately?) choose to ignore it?
 
What about someone like me, who is marginally informed, and sounds off occasionally on both sides of an issue?

That doesn't make me a troll does it?

Perhaps just a wishy washy orc, doppelganger or lich or something equally dreadful.
 
The fact that he's not (apparently, so far) bothered to do the same in at least two other JREF forum (Science etc section) threads, lends weight to the hypothesis that his posting behaviour in this thread is an abberation ...

And specifically which threads are those, oh great troll seeker? Any chance those are threads where you posted long after they'd been inactive? It's almost as if you wanted to make them even longer. And I thought you didn't like long threads. Silly me. Well watch out, DRD ... you may have to list yourself as a reason threads are getting so long here at JREF. In that paper you are doubtless writing for some erudite journal. :D
 
And how exactly does that address NASA's use of the catalog on their website? I see no mention of the warning that it is not to be used for statistical analysis. Was NASA derelict in not reprinting it?
.
I can't speak for NASA, nor can I speak for whoever wrote that webpage.

However, if the intended audience was/is professional astronomers, then it isn't necessary ... if only because it's a convention in this branch of science (as far as I know) that one must carefully read the details of how a catalogue was compiled before making use of it ... and the only way to do that is to read the words the people who compiled it were at pains to write, in describing and introducing the catalogue.

Perhaps you work, as a professional, in a different branch of science, where such a convention does not apply?

Perhaps you believe that in doing primary research using a catalogue that it is unnecessary to actually read what those who compiled it have to say about it?

I would like you to have a go at answering these questions; to me they address a somewhat boring but nonetheless integral part of what the doing of science is about.
.
Well perhaps that depends on how you define "statistical purposes". Let me quote some excerpts that to me sound like statistical analysis using Veron-Cetty data:

"In addition to our color criteria, we find that the number of GALEX-USNO matches drops rapidly for R ? 19.6, and we therefore limit our catalog to R ? 19.5. Finally, by comparing the Galactic-latitude distribution of our QSO candidates to QSOs found in the Veron catalog (Veron-Cetty 2006), we find that we have very little sensitivity for |b| < 25 ? and therefore do not search for quasars below this limit. We believe this low sensitivity is due to heavy extinction in the FUV -band."
.
I fully agree; you've hit the nail squarely on the head! :cool:

In this particular case, it would seem that authors used VCVcat for an entirely satisfactory purpose (statistically speaking) ... unlike Bell.
.
"We match our candidates to the Veron QSOs and 2MASS point sources via the VizieR search engine, requiring that the distance from the matched source to the candidate be less than 5??."

"From the 2692 sources in XMMSL1 (Freyberg et al. 2006), we find 20 that match our QSO candidates. The total probability of these matches predict that ten should be genuine QSOs; ten of these candidates appear in Veron-Cetty (2006), with little information for most of the others. One of candidates identified as a QSO in Veron-Cetty (2006) (USNO 1275-07898737) is unusual; it has an SDSS spectrum but cannot be classified by the automated pipeline."
.
Ditto ... look at what the authors are using VCVcat for!
.
So you haven't read the papers but yet you can state that the use is ok for the purposes stated? :rolleyes: Again, I shall just quote some excerpts that certainly seem to indicate VCVcat data being used for statistical analysis.

From source #3:

"In Figure 1, we plot the position on the sky in galactic coordinates of both the UHECRs and the selected BL Lac objects. There are no positional coincidences between these two samples up to an angular bin greater than 5L Lac objects and any UHECR data set with 33 entries to be Poisson with a mean value of ?4.06. Taking the data at face value, this implies a 2 j deviation effect." From the caption of Figure 1 - "The stars stand for the 22 BL Lac objects from the 9th edition of the Veron-Cetty and Veron (2000) Catalogue of Quasars and Active Galactic Nuclei, with redshifts z > 0.1 or unknown, magnitudes m < 18 and radio flux at 6 GHz (F6 > 0.17 Jy)."

"In a series of recent papers, Tinyakov & Tkachev (2001, 2002, 2003) claim a correlation between the arrival directions of UHECRs and BL Lac objects, a subgroup of the QSO sample previously considered. Specifically, the BL Lac objects chosen were those identified in the (9th edition) Veron-Cetty & Veron (2000) Catalogue of Quasars and Active Galactic Nuclei, with redshifts z > 0.1 or unknown, magnitudes m < 18 and radio flux at 6 GHz (F6 > 0.17 Jy). Only 22 objects fulfill such restrictions. In this analysis, there is no buffer against contamination by mismeasured protons piled up at the GZK energy limit. The cosmic-ray sample of Tinyakov & Tkachev consists of 26 events measured by the Yakutsk experiment with energy greater than 10^^19.38 eV (Afanasiev et al. 1996) and 39 events measured by the AGASA experiment with energy greater than 10^^19.68 eV (Hayashida et al. 2000). The evidence supporting their claim is based on six events reported by the AGASA collaboration (all with average energy <10^^19.9 eV) and two events recorded with the Yakutsk experiment (both with average energy <10^^19.6 eV), which were found to be within 2

"On a similar track, Gorbunov et al. (2002) claimed that a set of gamma-ray–loud BL Lac objects can be selected by intersecting the EGRET and BL Lac object catalogs. The only requirement that Gorbunov et al. considered for a BL Lac object to be physically associated with an EGRET source is that the angular distance between the best estimated position of the pair does not exceed 2R95, where R95 is the 95% CL contour of the EGRET detection. Their claim was based on a positional correlation analysis (using the doubled size for EGRET sources) between the third EGRET catalog (3EG; Hartman et al. 1999) and the objects identified as BL Lac in the Veron-Cetty & Veron (2000) catalogue."

From source #5:

"We test previously reported correlations between UHECRs and subsets of BL Lacs. Note that we test the physical hypothesis of correlation with a particular class of objects at a given angular scale and above a given energy threshold, but the collections of candidate sources are not identical to those in the original reports, because the sky observed by the southern Pierre Auger Observatory is different, and has only a partial overlap. ... snip ...
Test A: 22 BL Lacs from the 9th edition of the catalog of quasars and active nuclei [10], with optical magnitude m < 18, redshift z > 0.1 or unknown, and 6 cm radio flux F6 > 0.17 Jy. ... snip ... Test B: 157 BL Lacs (76 in the f.o.v.) from the 10th edition of [10] with m < 18. ... snip ... Test D: 204 confirmed BL Lacs (106 in the f.o.v.) from the 10th edition of [10] with m < 18. Subclasses: a) 157 BL, b) 47 HP. ... snip ... The determination of the statistical significance with which our measurements exclude the hypothesis that the signal present in the HiRes data set (case D) is due to correlations with BL Lacs is a delicate issue. ... snip ... Reference [10] M.-P. Veron-Cetty and P. Veron. A catalogue of quasars and active nuclei. 9th edition: ESO Scientific Report No. 19, 2001; 10th edition: Astron. & Astrophys. 374:92, 2001; 12th edition: Astron. & Astrophys. 455:773, 2006."

That last is very clearly a statistical analysis based on Veron-Cetty data. How would you describe it if it's not?
.
Thanks for going to the trouble of checking up on these. :)

Unless I missed something, the "statistical purposes" in these papers do not require VCVcat to be complete.

In fact, if I may say so, you seem to be confusing two very different kinds of analyses - those that require VCVcat to be complete (for at least some subset of data therein), and those whose results merely depend upon some sources selected from VCVcat (the "statistical analyses" done are essentially independent of the completeness of data in VCVcat).

Would it help you to understand the distinction if we were to go through just one paper, in detail?
.
The warning regarding VCVcat 12th Edition was not that it's "crazily wrong" but potentially incomplete. And very clearly, the earlier editions of the catalog were far more incomplete than the one Bell used. So perhaps we should throw out as garbage ANY work that relied on any VCVcat edition? Right?
.
Completely wrong.

Take a made up example.

Suppose I want to study a dozen BL Lac objects, and my set up is such that they need to be within a certain range of galactic latitudes, in the northern sky, and brighter than 20 in the B band. It doesn't matter, for the purposes of my study, which dozen or so I study, just so long as they are honest-to-goodness BL Lacs. VCVcat is then, it would seem, an ideal place to find them! If there aren't 12 that meet my criteria, then I may be stuck (though I could look elsewhere for them, or work on a very expensive survey to try to find them); if there are more than 12, then I'm done.

Does that make sense?

If you generalise it, maybe look at some other branch of science, would it help to understand my point?

By the way, this is pretty basic stuff ... even in an undergrad honours project or an MSc one.
.
Curious, it works for me. Oh well.

One down ...

Excerpts:

"We present optical images of nearby 50 narrow-line Seyfert 1 galaxies (NLS1s) which cover all the NLS1s at ... snip ... known at the time of 2001. ... snip ... With these images, we made morphological classification by eye inspection and by quantitative method, and found a high bar frequency of the NLS1s in the optical band ... snip ... The sample size is 2.6 times larger than that of NLS1s (19) studied by Crenshaw et al. (2003). Most of the present sample were taken from “a catalogue of quasars and active galactic nuclei, 10th edition” (Veron-Cetty & Veron 2001) ... snip ... For each object, we calculate an optical continuum luminosity ... snip ... , a black hole mass ... snip ... , and an Eddington ratio, and the values are listed in Table 1. The optical continuum luminosity is derived from the B magnitude given by Veron-Cetty & Veron 2003)"

That looks to me like they were doing some form of statistical analysis based on Veron-Cetty data too ... but maybe you, being a trained astrophysicist, will tell us otherwise. :)
.
Of course they were ... but that analysis did not require that the objects they found in VCVcat be complete, by some criterion or other (except, as VCVcat say, in terms of what anyone could find in the published literature to the date of cutoff for publication of VCVcat).
.
Excerpts:

"THE QUASAR SAMPLE The sample was selected from the quasar catalogue of Veron-Cetty & Veron (1993) and comprises two subsamples, both confined to the redshift range 0.29 < z < 0.43 ... snip ... These two samples allow us to explore an orthogonal direction in the optical luminosity - redshift plane, in contrast to our previous HST studies of quasar hosts (McLure et al. 1999; Kukula et al. 2001; Dunlop et al. 2003) which concentrated on quasars of comparably moderate luminosity (MV > ?25), but spanning a wide range in redshift out to z ? 2 (Fig.1)."

"Quasars in the current study. J2000 co-ordinates were obtained from the Digitised Sky Survey plates maintained by the Space Telescope Science Institute. Redshifts and apparent V magnitudes are from the quasar catalogue of Veron-Cetty & Veron (2000)"

Certainly looks like they did some form of statistical analysis. And yes, those refer to an earlier edition of VCVcat. But my comment about that stands. Your *supposed* concern was that the latest VCVcat is incomplete. Well that earlier catalog must have been far more incomplete. So shouldn't we simply disregard that work ... and for that matter, any work that depends on any version of the VCVcat? For consistency? And wouldn't writing a paper to that effect and publishing it in a mainstream journal be a much more valuable use of your time than studying whether I'm a troll and making threads too long for your liking? :)
.

Ditto.
 
.
Quote:
The last is, of course, just what a troll does, and I note that many JREF forum regulars have called you just that. If that's so, then it's an obvious conclusion - one reason why threads like this are so long is that people keep feeding the trolls.

... snip ...

You've reminded me that I need to find what the standard terms are for various kinds of logical fallacies (such as the one that this part of your post comes very close to, it seems) ... did you not read the qualifier (that I bolded)? Did you read it and (deliberately?) choose to ignore it?

Well, if you're going to use grammar as a denial of your obvious insinuation, perhaps you could tell us what specifically you were referring to when you wrote "If that's so"? You see, the location of that phrase seems to refer to your statement that "I note that many JREF forum regulars have called you just that." So are you doubting what you claim you noted? You sound confused. :)
 
DeiRenDopa said:
Of course, spiral arms are not optically thick, not even in the x-ray band, as W. Keel has shown in a series of papers, and as this Chandra PR attests (work based on discovery of a hole by Lockman et al.).
.
Care to show us ANY evidence of ANY object behind the galaxy being seen through that region of the galaxy? Or are you just waving your hands?
.

Would you mind explaining why this is relevant?
You really can't see the relevancy, DRD? Hmmmm.
.
BeAChooser, I asked you once before about this, and I see that you've gone and done it again! :mad::mad:

The words "Care to show us ANY evidence of ANY object behind the galaxy being seen through that region of the galaxy? Or are you just waving your hands?" are what you wrote, not what I wrote!!

I'm going to ask you again, politely: please write down the question you want to ask, as clearly as you can. Please make sure that you quote - accurately - what you, and I, wrote in prior posts that lead up to your question. Please make sure you take care to explain, in some detail, just what it is you are asking.
.
Except this object really isn't very far out on a spiral arm. Just take another look at the picture:

[qimg]http://ucsdnews.ucsd.edu/graphics/images/2004/spiralgalaxy.new.gif[/qimg]

What leads you to believe that region is "transparent"? After all, I linked peer-reviewed scientific papers by astronomers who conclude it is not. Astronomers who concluded the quasar was almost certainly on this side of NGC 7319, and part of their reasoning was the likely density of obscuring matter. For example, http://www.journals.uchicago.edu/doi/full/10.1086/426886 states "there are no signs of background objects showing through the disk in our HST picture of the inner regions of NGC 7319". Why would this quasar just coincidentally be the only one, DRD?
.
There are, in the sky, many, many bright galaxies.

There are also many which have large angular sizes, as measured by the area, in square arcseconds, within the 25 B mag isophote (to take just one example).

Many of these galaxies are spirals.

Bill Keel did an extensive study of the optical depth of the arms of (bright, big) spiral galaxies, using some pretty clever methods.

He concluded, in a series of papers on this topic, that it is rare to find any part of any arm of a spiral that has an optical depth of >1.

That is a general finding.

A corollary to this finding is that a claim that a particular part of a particular (bright, big) spiral galaxy does, in fact, have an optical depth >1 is a rather unusual one.

The paper you cite is about just one quasar and just one (bright, big) spiral galaxy.

I have already commented on some shortcomings of this paper, in terms of its a posteriori approach, and I note that you seem to have confirmed that, in your view of how astronomy (etc) should be done, such an approach is OK.

Perhaps a counter example might help?

The Einstein Cross, or QSO 2237+0305, is a background quasar lensed by a foreground galaxy ... and it is 'seen' right through the densest part of ZW 2237+030. If you're interested, there are some 233 NED references to it, in published papers. Of course, you may conclude that this is not quite the counter-example you would accept; if so, good ... let's discuss it some more then.

By the way, the post of yours I'm quoting seems to rest, in large part, on your acceptance of what a very small number of astronomers wrote, in just one paper.

Do you mind if I ask you how you go about evaluating the thousands of papers, written by hundreds (or more) of other astronomers, who find that quasars are at distances implied by their redshifts (per the Hubble relationship)?
 
Originally Posted by BeAChooser
(BAC's full reply above, quoted in its entirety)
.

I shall return to this, long, post later.

For now, I merely note that Chu et al. seem to be hedging their bets re the object with a z of 0.089 ... "When we consider these objects together ... there is a good correlation", "Just the chance that the above six objects could accidentally lie within ...", "An especially significant result for these six objects is their specific redshift values.", etc.

To the extent that it bolsters their case for the Karlsson formula (etc), they are happy to include it.

Also, one of the five quasars had been found before Chu et al. set to work (and before Arp did too, per "1997a"); how do you count that? How do you incorporate that fact into your statistical analysis?
 
What about someone like me, who is marginally informed, and sounds off occasionally on both sides of an issue?

That doesn't make me a troll does it?

Perhaps just a wishy washy orc, doppelganger or lich or something equally dreadful.
.
Per the definition, most certainly not!!

You ask questions, you ask for clarification, you do not (often) take what others write out of context (or make up stuff entirely, or mis-attribute it), you do not seem to go out of your way to press others' hot buttons, you seem to be careful in the way you write, you are happy (apparently) to re-phrase something when it is clear that you have not been well understood, etc, etc, etc.
 
And specifically which threads are those, oh great troll seeker? Any chance those are threads where you posted long after they'd been inactive? It's almost as if you wanted to make them even longer. And I thought you didn't like long threads. Silly me. Well watch out, DRD ... you may have to list yourself as a reason threads are getting so long here at JREF. In that paper you are doubtless writing for some erudite journal. :D
.

Indeed, one does fit the first part of your description: Another Problem With Big Bang? I brought it back to life because it seems to me that it's directly relevant to this thread, and to the other one I had in mind: Something new under the sun.

One particularly fascinating thing - to me anyway - is how vehemently you railed against 'magic' in one thread, yet how equally vehemently (so it seems to me) you defend something that looks exactly like the 'magic' you so dislike ('intrinsic redshift')!
 
Well, if you're going to use grammar as a denial of your obvious insinuation, perhaps you could tell us what specifically you were referring to when you wrote "If that's so"? You see, the location of that phrase seems to refer to your statement that "I note that many JREF forum regulars have called you just that." So are you doubting what you claim you noted? You sound confused. :)
.
If you are, indeed, a troll, then one reason why some threads are so long is that people keep feeding trolls.

I do not doubt that other people in the JREF forum have called you a troll (I checked to make sure of it, before I wrote that post).

What I was unsure of, at the time, was whether you were, indeed, a troll.

If you think what I write is not clear, may I ask you to do me the courtesy of asking me to clarify it?
 
Off Topic: Hi BeAChooser. You seem to be ignoring the question that I asked in other threads, so I thought that I would ask here. The posting is here.
The question is: Next gnome for the empirical observation of dark matter?
 
Dark Matter Matters

The question is: Next gnome for the empirical observation of dark matter?

Is the sticking point here the empirical evidence for dark matter, or the mass fraction of said dark matter?

Not to mention the type of dark matter.

If I recall, the Quasi-Steady-State Cosmology model recognizes that there must be some baryonic dark matter existing, everywhere.

Don't QSSC or Plasma Cosmology adherents have a problem with the huge mass fractions of the will-o'-the-wisp non-baryonic CDM that standard cosmology hangs its hat on?

I think even TeVeS and MOND predictions are for sizable mass fractions of dark matter in clusters like the Bullet. Just not the large quantities of the non-baryonic stuff.
 
