Psi in the Ganzfeld

Ersby

Split from here:

http://www.internationalskeptics.com/forums/showthread.php?t=70319&page=6

Brief recap: Rodney is quoting from Dean Radin's book Entangled Minds, specifically the sections regarding the ganzfeld experiments, and how Radin's meta-analysis demonstrates an effect far in excess of what you'd expect by chance.

Radin, Entangled Minds:
"From 1974 through 2004 a total of 88 ganzfeld experiments reporting 1,008 hits in 3,145 trials were conducted. The combined hit rate was 32% as compared to the chance-expected 25% (Figure 6-6). This 7% above-chance effect is associated with odds against chance of 29,000,000,000,000,000,000 (or 29 quintillion) to 1."


I maintain that Radin's meta-analysis is incomplete and gives no inclusion criteria, so it can't be taken seriously as an exhaustive and even-handed survey of the field. Effectively, Radin has taken the most famous results (which are also the most favourable) and put them together. There are in fact twice as many ganzfeld trials as he reports on (a little under 7,000 in total). Rodney also added this quote from Radin:

"This excludes a few of the earliest ganzfeld studies that couldn't be evaluated with a hit vs. miss type of analysis."

That more or less brings us up to the last post by Rodney, in which he said:

So, Radin is clearly stating that he has not excluded any but the earliest studies, nor has he excluded any studies at all on the basis of low hit rates. Rather, he is stating that his exclusions were based only upon the earliest studies' protocols, which, he claims, were not amenable to a hit vs. miss type of analysis. If that's wrong, you need to provide specifics on the studies that have been excluded by Radin.

The missing experiments from Honorton's 1985 meta-analysis, and therefore Radin's are: Parker, 1975; Stanford, Neylon, 1975; Smith, Tremmel, Honorton, 1976; Terry, 1976; Terry, Tremmel, Kelly, Harper, Barker, 1976; Habel, 1976; Rogo, 1977; Dunne, Warnock, Bisaha, 1977; Parker, Miller, Beloff, 1977; Braud, Wood, 1977; Palmer, 1979; Keane, Wells, 1979; Stanford, 1979; Palmer, Whitson, Bogart, 1980; Roney-Dougal, 1981.

Details about these experiments can be found at http://www.skepticreport.com/psychicpowers/ganzfeld.htm (download Part 1a at the bottom of the page). In the meantime, the scoring methods used (hit/miss scoring noted where applicable) were:

Parker, 1975, hit/miss, 50% MCE
Stanford, Neylon, 1975, ratings converted into z-scores *
Smith, Tremmel, Honorton, 1976, ten binary guesses per trial, MCE 50%
Terry, 1976, ten binary guesses per trial, MCE 50%
Terry, Tremmel, Kelly, Harper, Barker, 1976, ten binary guesses per trial, MCE 50%
Habel, 1976, hit/miss, 50% MCE
Rogo, 1977, ten binary guesses per trial, MCE 50%
Dunne, Warnock, Bisaha, 1977, ranking converted into z-scores
Parker, Miller, Beloff, 1977, ranking *
Braud, Wood, 1977, ten binary guesses per trial, MCE 50% **
Palmer, 1979, ratings converted into z-scores
Keane, Wells, 1979, ratings
Stanford, 1979, (not known) *
Palmer, Whitson, Bogart, 1980, ratings converted into z-scores
Roney-Dougal, 1981, ranking

* those which did not report results
** had three different scoring systems

So I was wrong when I said that most of the experiments could be assessed in a hit/miss format. But if you know the results and the number of trials, it's not difficult at all to come up with an "equivalent" hit rate and work from that. Radin demonstrates enough statistical know-how in his books to make me think it wouldn't be beyond him to reintroduce these experiments.

And as for those which did not report results numerically, they were described as being near chance or, in the case of Stanford and Neylon, 1975, below chance but not significantly so. From that a value can be estimated, thus avoiding any bias that could be introduced by reports on failed experiments which give only limited details of their results.
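
One common way to do that conversion is to match z-scores: score the study against its own MCE, then ask what hit count would give the same z at a 25% MCE over the same number of trials. A minimal sketch; the 18-hits-in-30 example is invented, not one of the studies listed above:

[code]
import math

def equivalent_hit_rate(hits, trials, mce, target_mce=0.25):
    """Re-express a result scored at one MCE as an 'equivalent' hit
    rate at another MCE by matching z-scores over the same trials."""
    z = (hits - trials * mce) / math.sqrt(trials * mce * (1 - mce))
    eq_hits = trials * target_mce + z * math.sqrt(
        trials * target_mce * (1 - target_mce))
    return eq_hits / trials

# Hypothetical example: 18 hits in 30 binary guesses (50% MCE)
print(f"{equivalent_hit_rate(18, 30, 0.5):.1%}")   # ~33.7% at 25% MCE
[/code]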

Radin is also missing post-1985 data, but it is harder to establish from the book which experiments those are.
 
[Homer Simpson]You can use facts to prove anything that's even remotely true.[/Homer Simpson]

Good post. :)
 
Oh boy, the ganzfeld experiments are so lacking in any controls over the selection of targets and target words, and then they use all sorts of silliness to make the results look 'significant'. Apparently they don't understand statistics or protocol.
 
What does MCE in the OP mean? I tried googling MCE and statistics but didn't get an answer ... Thanks.
 
What does MCE in the OP mean? I tried googling MCE and statistics but didn't get an answer ... Thanks.

Sorry. After reading so much of this stuff I find myself using terms, forgetting that no one else may understand them.

MCE is Mean Chance Expectation. For example, the MCE of correctly guessing a target out of four choices is 25%.
 
I maintain that Radin's meta-analysis is incomplete and gives no inclusion criteria, so it can't be taken seriously as an exhaustive and even-handed survey of the field. Effectively, Radin has taken the most famous results (which are also the most favourable) and put them together. There are in fact twice as many ganzfeld trials as he reports on (a little under 7,000 in total).
Thanks for all of the information. It will take me a while to read and digest. Preliminarily, however, if Radin has selectively cited 3145 favourable ganzfeld trials out of a little under 7,000, the overall hit rate would still be highly statistically significant, if Radin is correct that 1008 of the 3145 trials that he cites produced hits. For example, assume that there actually are an additional 4000 trials that overall produced chance results of 25% hits, or 1000 hits in total. That would bring the total number of ganzfeld trials to well over 7000 (7145). The total number of hits would be 2008 (the 1008 cited by Radin plus the assumed additional 1000). That would produce an overall hit rate of 2008 out of 7145, or 28.1%. While, to the layman, that might seem only narrowly above the chance rate of 25%, with that many trials the true odds against chance would actually be 986 million to 1. In fact, using an on-line binomial calculator, if you put in 7145 for the value of "n", 2008 for the value of "k", and 0.25 for the value of "q" and click on calculate, you will obtain a "P" value for "2008 or more out of 7145" so small that it is not even calculated exactly, but simply shows as "<0.000001." See http://faculty.vassar.edu/lowry/binomialX.html
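
For anyone without the web calculator to hand, the same tail probability can be computed directly; a small Python sketch of the calculation just described, using the same n, k and chance rate:

[code]
# "2008 or more out of 7145" at a 25% chance rate, computed directly
# instead of via the web calculator.
from scipy.stats import binom

p_value = binom.sf(2008 - 1, 7145, 0.25)   # P(X >= 2008)
print(f"P = {p_value:.2e}")
print(f"odds against chance = about {1 / p_value:,.0f} to 1")
[/code]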

So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.
 
Hi Ersby. Well done for having the patience to research this for so many years. One day, I'll take the time to read through all parts of your article!

I maintain that Radin's meta-analysis is incomplete and gives no inclusion criteria

I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

The missing experiments from Honorton's 1985 meta-analysis...

Did Honorton say why he left out those experiments?

As an aside, I was reading through your introduction section. I was wondering about the "funnel" graph you plotted. The graph you've taken from the Richard Palmer study looks like it's on a log scale on the x-axis. Could you confirm that? Your ganzfeld graph appears linear on the x-axis.
 
So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.

Let's go one step at a time. First I want to invalidate Radin's meta-analysis. That his work is selective is, I think, established and it cannot be taken seriously. Do you accept that now? Once we've agreed on that, then I'll move on to the database as a whole.
 
I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

Mostly, yes. I wouldn't expect him to define his inclusion criteria in a popular science book, but I would expect him to be quite happy to explain it in an email or on his blog.

Did Honorton say why he left out those experiments?

It was because Hyman had an issue with a few of the early experiments using more than one scoring method and then reporting the most successful one. Honorton agreed that this was a problem. His solution was to limit his analysis to those experiments that used direct hit scoring at 25%, 20% and 16.5% MCE. Why he didn't include 50% MCE as well is a puzzle to me.

As an aside, I was reading through your introduction section. I was wondering about the "funnel" graph you plotted. The graph you've taken from the Richard Palmer study looks like it's on a log scale on the x-axis. Could you confirm that? Your ganzfeld graph appears linear on the x-axis.
I don't know the details on Palmer's graph :o I just used it as a nice illustration of the shape you'd expect. The x-axis on my graph is linear, yes.
 
I don't know the details on Palmer's graph :o I just used it as a nice illustration of the shape you'd expect. The x-axis on my graph is linear, yes.

The Palmer graph x-axis certainly looks like a log scale from the spacing of intervals. But having just done a search on funnel plots in meta-analysis, it appears that both linear and log scales are used for both axes depending on the variable of interest.

This looks like a reputable summary. But it's way above my head!

http://biostatistics.oxfordjournals.org/cgi/reprint/1/3/247.pdf

I would be interested to see what kind of shape the ganzfeld plot turns out to be if you use a log scale on the x-axis though. If you still have the excel data, would you be able to do that relatively easily and let me know the result? Would be appreciated.
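
For reference, the replot being asked for is a one-line change in most plotting tools. A matplotlib sketch with simulated chance-level studies standing in for the real database; the data here are made up, and only the axis treatment is the point:

[code]
# Simulated funnel data on a linear vs a log x-axis. The 140 studies
# are random chance-level stand-ins, not Ersby's actual database.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
trials = rng.integers(10, 400, size=140)      # trials per study
rate = rng.binomial(trials, 0.25) / trials    # chance-level hit rates

fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharey=True)
for ax, scale in zip(axes, ("linear", "log")):
    ax.scatter(trials, rate, s=12)
    ax.axhline(0.25, linestyle="--")          # MCE
    ax.set_xscale(scale)
    ax.set_xlabel(f"trials per study ({scale} axis)")
axes[0].set_ylabel("hit rate")
plt.tight_layout()
plt.show()
[/code]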
 
Let's go one step at a time. First I want to invalidate Radin's meta-analysis. That his work is selective is, I think, established and it cannot be taken seriously. Do you accept that now? Once we've agreed on that, then I'll move on to the database as a whole.
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher. What sense does it make for him to quote odds against of 29 quintillion to 1 when, even if you include every possible study, the odds against are still a billion or so to 1? Would anyone fail to be impressed by odds of a billion to 1 against? So, I have to believe that Radin sincerely believes that a number of studies should be excluded because they did not meet the proper protocol. Can you cite a specific case where there were two studies done with the same protocols and he included one in his meta-analysis, but excluded the other?
 
Sorry. After reading so much of this stuff I find myself using terms, forgetting that no one else may understand them.

MCE is Mean Chance Expectation. For example, the MCE of correctly guessing a target out of four choices is 25%.

Thanks Ersby.

I've read your introductions to each section of your article, and plan on going through the actual article next.

As is probably no surprise, my lack of statistical knowledge is getting in the way of understanding all of your points, but I think I still get the main gist.

davidsmith73 said:
I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

Ersby said:
Mostly, yes. I wouldn't expect him to define his inclusion criteria in a popular science book, but I would expect him to be quite happy to explain it in an email or on his blog.

I agree, but I would also expect him to provide a footnote to the inclusion criteria in his popular science books so that anyone who wanted to refer to it could.

IMHO, Radin's refusal to answer this question strongly weakens his position. I suppose it's possible that he has answered this question privately to a "peer-reviewed" scientist, but I think this type of information should be public. Is it fair to say that in other fields, like the "hard sciences" or in research psychology, it is?
 
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher. What sense does it make for him to quote odds against of 29 quintillion to 1 when, even if you include every possible study, the odds against are still a billion or so to 1?

Where do you get the billion to 1 odds from? It just seems to be an assumption.

However, I think the crux of the matter is that if you average a number of good studies with a number of cooked-up studies, you will get an overall result above chance expectation.

Again - this kind of meta-analysis rests on the assumption that all studies are good, which we know they are not. If Radin wants to make a case, he'd have to explain how to make a study that will consistently give significant results - not just when some researchers do it, but always.
 
Where do you get the billion to 1 odds from? It just seems to be an assumption.
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.

However, I think the crux of the matter is that if you average a number of good studies with a number of cooked-up studies, you will get an overall result above chance expectation.
If there were a sufficient number of cooked-up studies, yes. But it remains to be demonstrated how many of those studies there were.

Again - this kind of meta-analysis rests on the assumption that all studies are good, which we know they are not. If Radin wants to make a case, he'd have to explain how to make a study that will consistently give significant results - not just when some researchers do it, but always.
No. When human beings are involved, results will vary. During Michael Jordan's basketball career, he established way beyond chance that he was one of the all-time greats. But there were many games when he was, at best, average.
 
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.

Where do you get the 4000 number from?

Additionally, you assume that the extra results would be neutral. If we ascribe these missing results to the 'file drawer effect', which I think is the most reasonable thing to do, then we would expect them to score below chance on average.
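
A toy simulation makes the point concrete: generate studies that are pure chance, "publish" only the individually significant ones, and the published pool lands well above 25% while the file drawer sits slightly below it. All numbers here are illustrative:

[code]
# File-drawer effect in miniature: 2000 pure-chance studies, of which
# only the individually significant ones get "published".
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n = 30                                    # trials per study
hits = rng.binomial(n, 0.25, size=2000)   # every study is pure chance
cutoff = binom.ppf(0.95, n, 0.25)         # one-tailed 5% threshold
pub, drawer = hits[hits > cutoff], hits[hits <= cutoff]
print(f"published: {len(pub)} studies, pooled rate {pub.sum() / (len(pub) * n):.1%}")
print(f"drawer:    {len(drawer)} studies, pooled rate {drawer.sum() / (len(drawer) * n):.1%}")
[/code]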

No. When human beings are involved, results will vary. During Michael Jordan's basketball career, he established way beyond chance that he was one of the all-time greats. But there were many games when he was, at best, average.

That's why every study involves multiple trials, to give the subjects the benefit of the doubt that their PSI abilities may only be measurable on good days. But you seem to be suggesting that the researcher can do a "good" job, and end up with a positive study, or a "bad" job, and end up with chance or below chance expectations. Doesn't this seem a bit.. suspect?
 
Where do you get the 4000 number from?
In the Opening Post, Ersby stated: "There are in fact twice as many ganzfeld trials as he [Radin] reports on (a little under 7,000 in total)." So, since Radin reports on 3145 trials, Ersby is suggesting that there are about 3855 (7000 - 3145) trials that Radin has not reported on. To be generous, I rounded the 3855 up to 4000, which would imply that there have actually been 7145 ganzfeld trials in total.

Additionally, you assume that the extra results would be neutral. If we ascribe these missing results to the 'file drawer effect', which I think is the most reasonable thing to do, then we would expect them to score below chance on average.
Why? A significantly below-chance result would actually be consistent with some psi believers' theory that certain researchers negatively impact the number of hits. So, I would think those experiments would get reported. On the other hand, to the extent that experiments are slightly positive or slightly negative, but not statistically significant, they could be subject to the 'file drawer effect'. In the absence of any evidence to the contrary, I think assuming exact chance results for unreported experiments is the fairest way to proceed.

That's why every study involves multiple trials, to give the subjects the benefit of the doubt that their PSI abilities may only be measurable on good days. But you seem to be suggesting that the researcher can do a "good" job, and end up with a positive study, or a "bad" job, and end up with chance or below chance expectations. Doesn't this seem a bit.. suspect?
I'm not assuming that, although again, some psi believers seem to believe that researchers who have negative attitudes toward psi can adversely affect results. But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral trials to refute Radin's general thesis.
 
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher.
Well, think about those pre-1985 experiments again. In The Conscious Universe, Radin said they were excluded because they did not report results. Now, in Entangled Minds, he says it is because they do not use hit/miss scoring systems.

It appears to me that, having discovered that his initial reason for excluding these experiments was wrong, he did not reintroduce them, but rather came up with a new reason to keep them excluded.

Can you cite a specific case where there were two studies done with the same protocols and he included one in his meta-analysis, but excluded the other?

Okay, right at the very beginning there's one of these dichotomies. Honorton and Parker both carried out the very first ganzfeld experiments separately (though Honorton was first to be published). Both are fairly typical ganzfeld set-ups, but Honorton's experiment is included, while Parker's isn't. In fact, Parker's is more typical, since he used white noise as the audio stimulus, while Honorton used sounds of the sea. Parker's used a 50% MCE and it is excluded on those grounds.

Another example, which is perhaps more telling, is what happened to the Cornell experiment. This replication of the PRL trials explored the difference between meditators and non-meditators, and it ran for 50 trials, scoring a 24% hit rate. Radin split the results into meditators (36%) and non-meditators (12%) and then simply excluded the non-meditators. He said he couldn't include data from subjects who were expected to do badly.

Well, quite apart from the fact that non-meditators aren't expected to do badly, he should've really applied that thinking to the whole database. Of course, this would leave him with a very small number of experiments. So he just took the non-meditators out of this one experiment.
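
The arithmetic of that split is worth spelling out. Assuming the even 25/25 session division Radin describes for Bem's study, the stated percentages imply 9 and 3 hits respectively:

[code]
# The Cornell split as plain arithmetic: 36% and 12% of 25 sessions
# each imply 9 and 3 hits, which pool back to the chance-level 24%.
med_hits, nonmed_hits, per_group = 9, 3, 25

print(f"meditators:     {med_hits / per_group:.0%}")                        # 36%
print(f"non-meditators: {nonmed_hits / per_group:.0%}")                     # 12%
print(f"combined:       {(med_hits + nonmed_hits) / (2 * per_group):.0%}")  # 24%
[/code]

Dropping the non-meditator half turns a chance-level 24% study into an apparently strong 36% one.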

Other examples (and it's sometimes hard to "reverse engineer" which experiments he's left out by looking at the data in his books) include Williams, Roe, Upchurch, Lawrence, 1994, which tested sender/non-sender protocols and also looked at geomagnetic activity. Both of these are pretty standard in ganzfeld research, so I can't see on what grounds this experiment should be missing.

Lastly, back in 1978, Schmit and Stanford did an experiment to see whether the menstrual cycle would affect success in ganzfeld ESP tests. This is in Radin's meta-analysis (as part of Honorton's data), but the replication of this experiment (which got worse results) by Keane & Wells, 1979, is not included.

Whether or not this is all deliberate, I don't know. But I suspect that, having found the results he wants by adding together some very high-profile results, Radin doesn't really want to explore the other experiments too closely.
 
Where do you get the 4000 number from?

Yes, he got it from me. My database of ganzfeld experiments contains between 6,700 and 7,000 trials (from around 140 experiments), depending on which way you count the number of trials in certain experiments.
 
But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral trials to refute Radin's general thesis.
Just to make things clear - I'm saying that Radin's meta-analysis is wrong and can't be considered evidence of anything. I'm not trying to refute Radin's general thesis (that psi exists).

Like I said, one thing at a time.
 
Thanks for all of the information. It will take me a while to read and digest. Preliminarily, however, if Radin has selectively cited 3145 favourable ganzfeld trials out of a little under 7,000, the overall hit rate would still be highly statistically significant, if Radin is correct that 1008 of the 3145 trials that he cites produced hits. For example, assume that there actually are an additional 4000 trials that overall produced chance results of 25% hits, or 1000 hits in total. That would bring the total number of ganzfeld trials to well over 7000 (7145). The total number of hits would be 2008 (the 1008 cited by Radin plus the assumed additional 1000). That would produce an overall hit rate of 2008 out of 7145, or 28.1%. While, to the layman, that might seem only narrowly above the chance rate of 25%, with that many trials the true odds against chance would actually be 986 million to 1. In fact, using an on-line binomial calculator, if you put in 7145 for the value of "n", 2008 for the value of "k", and 0.25 for the value of "q" and click on calculate, you will obtain a "P" value for "2008 or more out of 7145" so small that it is not even calculated exactly, but simply shows as "<0.000001." See http://faculty.vassar.edu/lowry/binomialX.html

So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.

The statistics would be nicer if there were better controls in place.

1. Each picture needs to have matching words assigned to it. This is a crucial control; otherwise the probability of a match is not twenty-five percent, it is unknown.
2. Then the word distribution associated with each picture needs to be examined: at what rate do certain pictures match certain words?
3. Then each set of pictures can be controlled for a very important variable I call 'the random match rate'. Each set should contain pictures that match different words, so no two pictures should be judged as having the same matching words. This has to be done; otherwise the probability of a word match cannot be assumed to be twenty-five percent.
4. Then the overall distribution of words given by the receiver must be examined, and the target pictures have to be matched against this random, or pseudo-random, match rate of words stated by the receiver. This would then eliminate the possibility of a picture having a fifty percent match to any random word selection.

These are the controls that need to be in place. Otherwise the 'target' picture might have a 50% random match rate to any given receiver string, which would raise the probability above 25%, especially if all the pictures in a set have high match rates to random receiver strings.

For the ganzfeld results to be significant you have to prove that the given match rate is 25%; you can't just assume that the match rate to a randomly chosen receiver string is 25%. Certain pictures could have a much higher or lower match rate to any given set of receiver word strings.
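
The worry can be put in simulation form, under one loud assumption: that the target is not drawn uniformly from the four pictures but, like the receiver's choice, in proportion to each picture's "appeal". The weights below are invented for illustration:

[code]
# Toy model of the objection: both target selection and judging follow
# the same unequal "appeal" weights (an assumption, not the protocol).
import numpy as np

rng = np.random.default_rng(2)
appeal = np.array([0.40, 0.30, 0.20, 0.10])    # invented match rates
N = 100_000
targets = rng.choice(4, size=N, p=appeal)      # biased target selection
guesses = rng.choice(4, size=N, p=appeal)      # appeal-driven choices
print(f"hit rate: {(targets == guesses).mean():.1%}")   # ~30%, not 25%
[/code]

Note that the above-chance rate here comes entirely from the biased target selection, which is exactly what the replies below dispute.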

That is why the ganzfeld data is not valid at this point.
 
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.


That is only true because you assume, without proving it, that a given picture has only a twenty-five percent chance of matching a random string of receiver words. First off, no picture in a set can share matching words with any other picture in the set. Then you have to prove that the match rate for any given picture is only twenty-five percent against any given random receiver string of words.

If a picture has a high probability of matching any random receiver string, then the odds are not twenty-five percent.
 
That is only true because you assume, without proving it, that a given picture has only a twenty-five percent chance of matching a random string of receiver words. First off, no picture in a set can share matching words with any other picture in the set. Then you have to prove that the match rate for any given picture is only twenty-five percent against any given random receiver string of words.

If a picture has a high probability of matching any random receiver string, then the odds are not twenty-five percent.
I don't follow your logic. The way a classic ganzfeld experiment works is that a recipient, without knowing which of four pictures is the target picture focused on by a sender, selects one of the four. If the recipient chooses the picture focused on by the sender, that's a hit -- otherwise, it's a miss. The sender is not allowed to choose the picture focused on, but rather is required to focus on a picture selected at random. There are some variations on this procedure, such as using a panel of judges to determine, based on the recipient's comments, which of the four pictures is the best match, but again, the panel does not know which is the target picture.

Now, in some cases, the picture focused on may be more likely to be chosen by the recipient or the panel because, for example, it may have more vivid imagery than the other three pictures. However, the reverse is also true: in some cases, the picture focused on may be less likely to be chosen by the recipient or the panel because it may have less vivid imagery than the other three pictures. As long as selection of the picture is random and the recipient or panel does not know which picture is the target, the expected hit rate should be 25%.
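
That symmetry argument is easy to check in simulation: keep the same appeal-driven judging as in the earlier sketch, but draw the target uniformly at random, as the protocol requires, and the bias cancels:

[code]
# Uniform random target selection plus an appeal-biased judge still
# yields the chance rate of 25%, whatever the appeal weights are.
import numpy as np

rng = np.random.default_rng(3)
appeal = np.array([0.40, 0.30, 0.20, 0.10])    # same invented appeal
N = 100_000
targets = rng.integers(4, size=N)              # uniform random target
guesses = rng.choice(4, size=N, p=appeal)      # appeal-driven choices
print(f"hit rate: {(targets == guesses).mean():.1%}")   # ~25%
[/code]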
 
Why? A significantly below-chance result would actually be consistent with some psi believers' theory that certain researchers negatively impact the number of hits. So, I would think those experiments would get reported.

If only some psi believers believe this (which appears to be correct) then we should assume that only some studies with below-chance expectations would be published. Which again, suggests a below-chance average for unpublished results.

Additionally, for Radin's selection (thanks for explaining the numbers again, I misunderstood), the choice is done by Radin, who apparently does not consider below-chance expectations to be significant.

I'm not assuming that, although again, some psi believers seem to believe that researchers who have negative attitudes toward psi can adversely affect results.

Your analogy with the basketball player suggests that you do assume this. Otherwise, why would researchers need several chances to get a positive result from an overall study?

I'm sure that researchers who have negative or even neutral attitudes towards psi can adversely affect results. First, they tend to impose stricter controls and be more careful with their statistics. Second, we can be quite sure that they will not fabricate positive results (though they may, of course, fabricate a non-significant one).

But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral trials to refute Radin's general thesis.

Which again assumes that all the positive experiments are good. Which is clearly not the case. This is why, again, we need Radin and friends to point out a procedure for an experiment that consistently gives significantly positive results.
 
I would be interested to see what kind of shape the ganzfeld plot turns out to be if you use a log scale on the x-axis though. If you still have the excel data, would you be able to do that relatively easily and let me know the result? Would be appreciated.

I'm not sure what that would achieve. But as it turns out, I brought the wrong CD-ROM into work this morning, so I can't do anything 'til after Xmas.
 
Another example, which is perhaps more telling, is what happened to the Cornell experiment. This replication of the PRL trials explored the difference between meditators and non-meditators, and it ran for 50 trials, scoring a 24% hit rate. Radin split the results into meditators (36%) and non-meditators (12%) and then simply excluded the non-meditators. He said he couldn't include data from subjects who were expected to do badly.

Well, quite apart from the fact that non-meditators aren't expected to do badly, he should've really applied that thinking to the whole database. Of course, this would leave him with a very small number of experiments. So he just took the non-meditators out of this one experiment.

Oh, what a mess. It's things like this that shake my confidence in the ganzfeld meta-analysis. I had no idea that the Cornell experiments were split into two hit rates, just from reading The Conscious Universe. Perhaps Radin had a good reason to do this, but from what you've said, Ersby, I can't see how. Yes, non-meditators should also be getting above-chance results, since much of the entire database consists of non-meditating trials.

However, I still believe that ganzfeld research provides promising results. The Cornell experiments in themselves suggest a difference between altered states. Perhaps meta-analyses should concentrate on finding differences between these kinds of states of consciousness instead of trying to lump different kinds of experiments together.

Do you know of such a meta-analysis?
 
Honorton investigated the difference in his work at PRL, which found that meditators scored higher than non-meditators. Milton & Wiseman, in their m-a, found no such effect in the data they examined. Broughton also looked at the difference, but I can't remember what he found.

[shameless plug]If you search my articles for "meditator", that should point you to the data you want.[/shameless plug]

Bierman and Wezelman also examined the difference in scoring for people on drugs, but found no effect.
 
However, I still believe that ganzfeld research provides promising results.

"No matter what happens, even if the research I have so far put so much trust in turns out to be bogus...I will still be a believer, because there is always some new crap I can hitch on to."
 
I don't follow your logic. The way a classic ganzfeld experiment works is that a recipient, without knowing which of four pictures is the target picture focused on by a sender, selects one of the four. If the recipient chooses the picture focused on by the sender, that's a hit -- otherwise, it's a miss. The sender is not allowed to choose the picture focused on, but rather is required to focus on a picture selected at random. There are some variations on this procedure, such as using a panel of judges to determine, based on the recipient's comments, which of the four pictures is the best match, but again, the panel does not know which is the target picture.

Now, in some cases, the picture focused on may be more likely to be chosen by the recipient or the panel because, for example, it may have more vivid imagery than the other three pictures. However, the reverse is also true: in some cases, the picture focused on may be less likely to be chosen by the recipient or the panel because it may have less vivid imagery than the other three pictures. As long as selection of the picture is random and the recipient or panel does not know which picture is the target, the expected hit rate should be 25%.


Procedure varies, but in many experiments there is a word choice made by the receiver: they say some words and then decide which picture they think the words match.

In the protocol you are describing, a similar effect would still apply. Certain pictures are more or less likely to be chosen at random, without any ganzfeld experiment. To control for random picture selection you would have to match the pictures on a different scale.

Show a random set of four pictures to each receiver, tell them that someone else is trying to 'send' them an image, and ask them to choose the target. Do the same with senders: have them choose a picture to 'send' without a receiver. Certain pictures will have a higher chance of being chosen over time, and certain pictures a lower one, simply due to the appeal of the subject matter.

You can then match sets so that there is an equal probability of either a sender match or a receiver match.

Then you could say that there was a base probability of 25%. But it is very possible that certain pictures are more likely than others to be chosen by receivers, and if those are the 'target', then that is going to be an effect other than psi.
 
Oh, what a mess. It's things like this that shake my confidence in the ganzfeld meta-analysis. I had no idea that the Cornell experiments were split into two hit rates, just from reading The Conscious Universe. Perhaps Radin had a good reason to do this, but from what you've said, Ersby, I can't see how. Yes, non-meditators should also be getting above-chance results, since much of the entire database consists of non-meditating trials.
I don't see the evidence that Radin has excluded non-meditators from the overall figures he cites in Entangled Minds; i.e., 3145 ganzfeld trials with 1008 hits. If he did exclude them, that would be wrong, but I still come back to how far above chance 1008 hits in 3145 trials is. Is there any meta-analysis that Ersby or anyone else has done of all experiments that shows the overall hit rate was not significantly above chance?
 
Rodney: Well, if he DID exclude the non-meditators, then this would be exactly the kind of selection I was talking about. Exclusion of below chance outcomes. So at any rate, I think your theory that any added data would only be at chance expectations seems very weak indeed.

In fact there seems to be three possible explanations here:
a) Data with below-chance expectations was excluded. Either by Radin himself, or due to a 'file drawer' effect.
b) Some above-chance studies are bad, raising the average. Either because of outright fraud, or due to flawed methodology.
c) The PSI effect exists.

So in the end, I repeat my mantra: Radin should stop focusing so much on averages, which will never prove anything, and instead concentrate on finding a test procedure that works reliably.
 
I don't see the evidence that Radin has excluded non-meditators from the overall figures he cites in Entangled Minds;

Why do you say that you cannot see it? Maybe I have not been clear enough. Let me try once more.

Bear in mind that the Entangled Minds meta-analysis is just an updated version of the m-a in The Conscious Universe. Okay?

So, go back to your copy of The Conscious Universe.

Look at figure 5.4 regarding ganzfeld experiments. See that Cornell is listed as having a hit rate of 36%. The only way Radin could come to that conclusion is if he deliberately excluded non-meditators from his meta-analysis for that experiment.

Okay?

Now, just to seal the argument, let's hear from Radin himself:

Radin, "Should Ganzfeld Research Continue to Be Crucial in the Search for a Replicable Psi Effect? Part Ii. Edited Ganzfeld Debate", JoP, vol 6, 1999
"Bem's experiment was a differential ganzfeld study involving meditators (25 sessions) and nonmeditators (25 sessions). I did not include the nonmeditator data in my analysis because that group was predicted to not perform as well as the meditators (which is what happened), and I couldn't justify including in a proof-oriented meta-analysis a subset which was predicted to "not" perform."


I repeat: non-meditators are not predicted to do badly in ESP ganzfeld experiments.

Now, Rodney, whether you see it or not is no longer an issue for the ganzfeld experiments, more an issue for your ability to process data which does not conform to your world view.

Either way, please furnish us with data supporting your view, or admit that Radin's meta-analysis of ganzfeld is not evidence for ESP.

Is there any meta-analysis that Ersby or anyone else has done of all experiments that shows the overall hit rate was not significantly above chance?

Once again, I must insist that we deal with one topic at a time.

When you admit that Radin's meta-analysis is incomplete, we can move on to the database as a whole.

And if you do not admit that Radin's meta-analysis is incomplete, please give your evidence for that position. I'm afraid that statements like "I have to believe that..." are not good enough.

I have given you plenty of data demonstrating beyond doubt that Radin's meta-analysis is fatally flawed.

If you think otherwise, please explain why. With data.

Then we may continue.
 
Psi in the Ganzfeld, shoo, psi, shoo!
Psi in the Ganzfeld, shoo, psi, shoo,
Psi in the Ganzfeld, shoo, psi, shoo!
Skip to m'lou, my darling.
 
Why do you say that you cannot see it? Maybe I have not been clear enough. Let me try once more.

Bear in mind that the Entangled Minds meta-analysis is just an updated version of the m-a in The Conscious Universe. Okay?
Are you sure that Radin has not made any adjustments from the Conscious Universe?

So, go back to your copy of The Conscious Universe.

Look at figure 5.4 regarding ganzfeld experiments. See that Cornell is listed as having a hit rate of 36%. The only way Radin could come to that conclusion is if he deliberately excluded non-meditators from his meta-analysis for that experiment.

Okay?

Now, just to seal the argument, let's hear from Radin himself:

Radin, "Should Ganzfeld Research Continue to Be Crucial in the Search for a Replicable Psi Effect? Part Ii. Edited Ganzfeld Debate", JoP, vol 6, 1999
"Bem's experiment was a differential ganzfeld study involving meditators (25 sessions) and nonmeditators (25 sessions). I did not include the nonmeditator data in my analysis because that group was predicted to not perform as well as the meditators (which is what happened), and I couldn't justify including in a proof-oriented meta-analysis a subset which was predicted to "not" perform."
I agree that doesn't seem justified, but I can't see why Radin would be so concerned with a paltry 25 trials, which has no significant bearing on his analysis.
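
To put numbers on "paltry": assuming the 3 hits in 25 sessions implied by the 12% figure, folding the excluded non-meditators back in barely moves the pooled hit rate:

[code]
# Pooled hit rate with and without Bem's 25 non-meditator sessions
# (3 hits assumed, from the 12% figure quoted earlier in the thread).
print(f"as reported: {1008 / 3145:.2%}")                # 32.05%
print(f"re-included: {(1008 + 3) / (3145 + 25):.2%}")   # 31.89%
[/code]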

I repeat: non-meditators are not predicted to do badly in ESP ganzfeld experiments.
Are you sure that Daryl Bem did not predict that? One of the commenters stated: "Daryl Bem's study is unpublished, so I cannot check whether he predicted nonmeditators not to perform at all or to perform less well than meditators--two entirely different things."

Now, Rodney, whether you see it or not is no longer an issue for the ganzfeld experiments, more an issue for your ability to process data which does not conform to your world view.
Ah yes, the objective skeptic versus the biased believer. :)

Either way, please furnish us with data supporting your view, or admit that Radin's meta-analysis of ganzfeld is not evidence for ESP.
1008 hits out of 3145 trials with an expected hit rate of 25% is not evidence for ESP? And I've already demonstrated that adding in 4000 more trials is highly unlikely to reduce Radin's findings to insignificance.

When you admit that Radin's meta-analysis is incomplete, we can move on to the database as a whole.
Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate. But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.
 
Are you sure that Radin has not made any adjustments from the Conscious Universe?

How could Radin "adjust" the Cornell experiments?

I agree that doesn't seem justified, but I can't see why Radin would be so concerned with a paltry 25 trials, which has no significant bearing on his analysis.

Then which 25 trials are significant? If we know that he selected some data erroneously, then why should we expect him not to have done the same in many other cases?

Are you sure that Daryl Bem did not predict that? One of the commenters stated: "Daryl Bem's study is unpublished, so I cannot check whether he predicted nonmeditators not to perform at all or to perform less well than meditators--two entirely different things."

But we know that Radin did not predict that. If he did, he should exclude all the other trials with non-meditators, too. Presumably that would be most of the data in the m-a.


1008 hits out of 3145 trials with an expected hit rate of 25% is not evidence for ESP?

Not if the selection of experiments has been shown to be flawed.

And I've already demonstrated that adding in 4000 more trials is highly unlikely to reduce Radin's findings to insignificance.

Again, only if you assume that they will be neutral. Based on the 25 trials that we know he excluded, we know that this is not true.

Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate.

It's not necessary that he did it deliberately, only that he did it.

But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.

Which one of them? I would like to know which test setup you think will produce significant results. If you could tell us, we could move on to compare different studies using the same kind of setup, and from there it should be possible to draw such conclusions.
 
Ah yes, the objective skeptic versus the biased believer. :)
Well, look at it from my point of view - I don't want to be the one that brings all the data to the party. If you hold an opinion on something, the least you can do is explain why.

1008 hits out of 3145 trials with an expected hit rate of 25% is not evidence for ESP?
As Merko said, with Radin's selection process (or, rather, lack of it), it is not evidence for ESP. If I wanted I could put together a meta-analysis as big as Radin's with a hit rate at chance. Would you consider that to be evidence against psi?

Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate.
Deliberate or not, I don't know. But that's what he did.

But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.
Not all of them were flawed, of course. There's about a dozen which have statistically significant scores and no obvious flaws. Do you want me to pick an experiment and we can discuss it?
 
As Merko said, with Radin's selection process (or, rather, lack of it), it is not evidence for ESP. If I wanted I could put together a meta-analysis as big as Radin's with a hit rate at chance. Would you consider that to be evidence against psi?
If it is logical. Feel free to do so.

Not all of them were flawed, of course. There's about a dozen which have statistically significant scores and no obvious flaws. Do you want me to pick an experiment and we can discuss it?
Yes.
 
My m-a, should I do it, will be every bit as logical as Radin's.

But it'll be after Christmas. See you in a few days.
 
