Split from here:
http://www.internationalskeptics.com/forums/showthread.php?t=70319&page=6
Brief recap: Rodney is quoting from Dean Radin's book Entangled Minds, specifically the section on the ganzfeld experiments and how Radin's meta-analysis demonstrates an effect far in excess of what you'd expect by chance.
Radin, Entangled Minds:
"From 1974 through 2004 a total of 88 ganzfeld experiments reporting 1,008 hits in 3,145 trials were conducted. The combined hit rate was 32% as compared to the chance-expected 25% (Figure 6-6). This 7% above-chance effect is associated with odds against chance of 29,000,000,000,000,000,000 (or 29 quintillion) to 1."
I maintain that Radin's meta-analysis is incomplete and has no stated inclusion criteria, so it can't be taken seriously as an exhaustive and even-handed survey of the field. Effectively, Radin has taken the most famous results (which are also the most favourable) and put them together. There are in fact roughly twice as many ganzfeld trials as he reports on (a little under 7,000 in total). Rodney also added this quote from Radin:
"This excludes a few of the earliest ganzfeld studies that couldn't be evaluated with a hit vs. miss type of analysis."
That more or less brings us up to the last post by Rodney, in which he said:
So, Radin is clearly stating that he has not excluded any but the earliest studies, nor has he excluded any studies at all on the basis of low hit rates. Rather, he is stating that his exclusions were based only upon the earliest studies' protocols, which, he claims, were not amenable to a hit vs. miss type of analysis. If that's wrong, you need to provide specifics on the studies that have been excluded by Radin.
The experiments missing from Honorton's 1985 meta-analysis, and therefore from Radin's, are: Parker, 1975; Stanford, Neylon, 1975; Smith, Tremmel, Honorton, 1976; Terry, 1976; Terry, Tremmel, Kelly, Harper, Barker, 1976; Habel, 1976; Rogo, 1977; Dunne, Warnock, Bisaha, 1977; Parker, Miller, Beloff, 1977; Braud, Wood, 1977; Palmer, 1979; Keane, Wells, 1979; Stanford, 1979; Palmer, Whitson, Bogart, 1980; Roney-Dougal, 1981.
Details about these experiments can be found at http://www.skepticreport.com/psychicpowers/ganzfeld.htm (download Part 1a at the bottom of the page). In the meantime, the scoring methods used were:
Parker, 1975: hit/miss, MCE 50%
Stanford, Neylon, 1975: ratings converted into z-scores *
Smith, Tremmel, Honorton, 1975: ten binary guesses per trial, MCE 50%
Terry, 1976: ten binary guesses per trial, MCE 50%
Terry, Tremmel, Harper, Barker, 1976: ten binary guesses per trial, MCE 50%
Habel, 1976: hit/miss, MCE 50%
Rogo, 1977: ten binary guesses per trial, MCE 50%
Dunne, Warnock, Bisaha, 1977: ranking converted into z-scores
Parker, Miller, Beloff, 1977: ranking *
Braud, Wood, 1977: ten binary guesses per trial, MCE 50% **
Palmer, 1979: ratings converted into z-scores
Keane, Wells, 1979: ratings
Stanford, 1979: scoring method unknown *
Palmer, Whitson, Bogart, 1980: ratings converted into z-scores
Roney-Dougal, 1981: ranking
* did not report numerical results
** used three different scoring systems
So I was wrong when I said that most of the experiments could be assessed in a hit/miss format. But if you know the results and the number of trials, it's not difficult to work out an "equivalent" hit rate and proceed from that. Radin demonstrates enough statistical know-how in his books to make me think it wouldn't be beyond him to reincorporate these experiments.
As for the studies that did not report results numerically, they were described as being near chance, or, in the case of Stanford and Neylon, 1975, below chance but not significantly so. From that a value can be estimated, which avoids the bias that would be introduced by simply dropping failed experiments whose reports give only limited detail about their results.
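To make that concrete, here is roughly what the conversion looks like. This is my own sketch with invented example numbers, not the actual data from the studies listed above: given a study's trial count and its reported z-score (or a verbal description treated as z ≈ 0 for "near chance"), you can back out the number of direct hits that would produce the same result in a standard 25%-MCE design.

```python
# A sketch of the "equivalent hit rate" idea described above (my own
# illustration; the example figures are invented, not taken from the
# studies in the list).
from math import sqrt

def equivalent_hits(n_trials, z, p_chance=0.25):
    """Direct-hit count that would yield the same z-score in an
    n-trial, four-choice (25% MCE) ganzfeld design."""
    expected = n_trials * p_chance
    sd = sqrt(n_trials * p_chance * (1 - p_chance))
    return expected + z * sd

# Hypothetical examples only:
print(equivalent_hits(30, z=1.2))    # a modestly positive study
print(equivalent_hits(20, z=0.0))    # a study described only as "near chance"
print(equivalent_hits(40, z=-0.4))   # below chance, but not significantly
```

The equivalent hit counts won't be whole numbers, but for a pooled analysis that doesn't matter; what matters is that studies reported only as z-scores, rankings, or verbal summaries contribute something rather than being silently dropped.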
Radin is also missing post-1985 data, but it is harder to establish from the book which studies those are.