Old 8th April 2009, 01:28 AM   #1
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
The Ganzfeld - my findings to date

Posting this for comments/criticisms.

Well, it's been long enough so the other day I chucked out my previous calculations, went right back to the raw data, and dug around in Excel until I found the right formula so it could do the sums properly. I then used a piece of free meta-analysis software to crunch the numbers for me. This is what I got.

Number of experiments=65
Number of trials=2,854
Weighted average z-value=1.4571
Significance (one-tailed) P=0.07 (or 1 in 13)

The instructions for this piece of software recommended transforming the data into r values, so I did that too.

Population effect size=0.02792 (in other words, very small)
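
(For anyone who wants to check the arithmetic: the usual way to combine per-study z-values is a weighted Stouffer sum, and r is commonly approximated as z divided by the square root of n. A minimal Python sketch with made-up numbers, assuming trial-count weights - not necessarily the exact formula the software uses:)

Code:
import math

def weighted_stouffer(zs, ns):
    # Combine per-study z-values, weighting each study by its trial count.
    return sum(n * z for z, n in zip(zs, ns)) / math.sqrt(sum(n * n for n in ns))

def one_tailed_p(z):
    # Upper-tail p-value for a standard normal deviate.
    return 0.5 * math.erfc(z / math.sqrt(2))

zs = [1.2, -0.3, 0.8]  # hypothetical per-study z-values
ns = [40, 25, 60]      # hypothetical trial counts
z = weighted_stouffer(zs, ns)
print(z, one_tailed_p(z))
# Per-study effect sizes r are often approximated as z / sqrt(n):
print([round(zi / math.sqrt(ni), 4) for zi, ni in zip(zs, ns)])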

Some people may have seen from the filenames in the thumbnail below that this is the homogeneous version of the database. The first time I ran the data through the program I got a statistically significant result (at odds of approximately 1 in 150,000), but a very heterogeneous one.

According to the instructions, this is not acceptable, so I trimmed the outliers until I got the homogeneous results.

Also, apologies to people I'd sent data/notes to. If they looked over my work, I hope they don't feel they wasted their time now that I've posted my own figures before hearing their recommendations. I'm still interested in what you have to say - if I've made any mistakes, let me know!

Next, the inclusion criteria used for putting this database together. Again, any comments are welcome.

No unpublished papers, media demonstrations, or classroom demonstrations
No experiments labelled pilot, informal or exploratory
No papers without pre-defined numbers of trials
No unfinished experiments (one exception - Smith & Savva's 2008 experiment fell just ten short of its target of 128 trials. It felt churlish to exclude it)
Sargent's work at Cambridge excluded due to security issues
No experiments that allow contact between someone knowledgeable about the target and the person judging the session
Only experiments that used random number generators or random number tables to choose the target set and target
Only experiments that had duplicate target sets for judging purposes
Only experiments that used red light as the visual stimulus
Only experiments that used white noise or pink noise (ie, static) as the audio stimulus
No experiments of very short duration (15 minutes or under) or very long (45 minutes or longer)
Only visual targets
Only one target per session

All of these are issues which have been raised by parapsychologists with regard to the ganzfeld. I haven't just plucked them out of the air. In fact, between me only choosing criteria according to the writings of other people, and getting the computer to run the numbers for me, I feel like I've removed myself as much from the process as is possible.

Lastly, I have to admit being amazed at the result. Everything I'd done to date had pointed to odds of around one in ten thousand. When it came out with 1 in 13, I was stunned.

(below - scattergraph of p numbers and trials for the database. P=0.5 is chance. Then two result screens from the program I used -
http://web.fu-berlin.de/gesund/gesu_engl/meta_e.htm )
Attached images: ganz homog.JPG, homogenous results.JPG, effect r homogenous.JPG
Old 8th April 2009, 02:38 AM   #2
Aepervius
Non credunt, semper verificare
 
Join Date: Aug 2003
Location: Sigil, the city of doors
Posts: 14,581
For those of us who are a bit too dumb to understand your screenshot, can you explain?

Also
Quote:
Lastly, I have to admit being amazed at the result. Everything I'd done to date had pointed to odds of around one in ten thousand. When it came out with 1 in 13, I was stunned.
What are the 1 in ten thousand and the 1 in 13 odds on?
Old 8th April 2009, 02:58 AM   #3
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Basically, I think that the database of all ganzfeld experiments into ESP, when sorted for methodological soundness and standardness of protocol, doesn't give a statistically significant result. The 65-experiment database has an overall hit rate of 27.4%.

For now I'm limiting myself mostly to p values, because they're simple.

The result of p=0.07 translates roughly into odds of 1 in 13, ie, a 7% chance of these results occurring if there were no effect.
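
(A two-line sketch of the conversion, since it trips people up: 1/p gives the "1 time in N" form, while (1-p)/p gives the odds-against form.)

Code:
p = 0.07
print(1 / p)        # ~14.3: results this extreme about 1 time in 14 if there is no effect
print((1 - p) / p)  # ~13.3: the odds-against form, ie, roughly the "1 in 13" above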

The "one in ten thousand" was what I thought the end result would be before I sat down and started again from scratch. That was based on my previous (erroneous) sums and the work of others. It was quite a rough estimate.

But Wikipedia has a decent page on the subject of p values, including the many misunderstandings. I'm sure I'm making at least one of these mistakes. I hope someone here can point out which one(s).

http://en.wikipedia.org/wiki/P_value
Old 8th April 2009, 02:59 AM   #4
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Originally Posted by Ersby View Post
According to the instructions, this is not acceptable, so I trimmed the outliers until I got the homogeneous results.
There may be an issue here - but I'll let those who know what they are talking about deal with that. Certainly trimming would be normal in this situation, but I'm not sure... It certainly precludes fraud and massive experimenter error to some extent, though!

Originally Posted by Ersby View Post
Next, the inclusion criteria used for putting this database together. Again, any comments are welcome.

No unpublished papers, media demonstrations, or classroom demonstrations
No experiments labelled pilot, informal or exploratory
No papers without pre-defined numbers of trials
No unfinished experiments (one exception - Smith & Savva's 2008 experiment fell just ten short of its target of 128 trials. It felt churlish to exclude it)
Sargent's work at Cambridge excluded due to security issues
No experiments that allow contact between someone knowledgeable about the target and the person judging the session
Only experiments that used random number generators or random number tables to choose the target set and target
Only experiments that had duplicate target sets for judging purposes
Only experiments that used red light as the visual stimulus
Only experiments that used white noise or pink noise (ie, static) as the audio stimulus
No experiments of very short duration (15 minutes or under) or very long (45 minutes or longer)
Only visual targets
Only one target per session

Here I do know what I am talking about, and yes, it's very much a solid list. What happens if you exclude Smith & Savva (2008)? I can't imagine it will make much difference!

Great research, look forward to hearing more on this. You going to publish it?

cj x
Old 8th April 2009, 03:00 AM   #5
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Do you mean p=0.5 is chance? Surely you mean p=0.5 is significance?

cj x
Old 8th April 2009, 03:09 AM   #6
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
The funny thing about trimming outliers is that I got that technique from the discussion over Milton & Wiseman's meta-analysis. Someone (Radin, I think) said that trimming the outliers made the database (a) homogenous, and (b) significant. But I'd like more input on this topic too.

p=0.05 is significant (at odds of 1 in 20). p=0.5 is chance (odds of 1 in 2).
Old 8th April 2009, 03:11 AM   #7
paximperium
Penultimate Amazing
 
 
Join Date: May 2008
Posts: 10,696
Originally Posted by cj.23 View Post
Do you mean p=0.5 is chance? Surely you mean p=0.5 is significance?

cj x
A p-value means the results are due to chance.

A p=0.5 means a 50% probability that the results were due to random chance.

Most use p=0.05 as the cut-off.
Old 8th April 2009, 03:20 AM   #8
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Originally Posted by Ersby View Post
The funny thing about trimming outliers is that I got that technique from the discussion over Milton & Wiseman's meta-analysis. Someone (Radin, I think) said that trimming the outliers made the database (a) homogenous, and (b) significant. But I'd like more input on this topic too.

p=0.05 is significant (at odds of 1 in 20). p=0.5 is chance (odds of 1 in 2).
Sorry, yes I know that - well I should, given my former job! I was being particularly dim and misreading. I have spent the morning working with fractions of fractions since blobru asked about my math on the fundie thread... So yeah, 93% likely not to be chance, so a couple of percent under significance, and quite possibly an artefact. Not significant anyway! Interesting!

Well, if you trim the data set, say removing Kathy Dalton's extreme results, you will get a more homogeneous data set - and it should work just fine, without distorting at all. Did you use a file drawer correction? If we apply that, it may be that the whole thing drops to chance.

However, I see a chance to do something really fun. Is there any way we can test your data for chronological sequence, ie, how do results change over time? I see a continual move towards more and more chance results - perhaps suggesting experimental procedures are improving, or the reality of an expectation Experimenter Effect, or that Loki does not like Ganzfeld anymore. Any way of checking for a decline effect?
Old 8th April 2009, 03:27 AM   #9
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
I can't do much with the data at the moment, since the program I use is on my computer at home.

I do have the Excel files, though, and it's interesting to note that the "decline effect" has gone. If anything, results get slightly better. (Again, using p values on the y-axis, bearing in mind a low value indicates better results.)
Attached image: chronological.JPG
Old 8th April 2009, 03:27 AM   #10
Professor Yaffle
Butterbeans and Breadcrumbs
 
 
Join Date: Jan 2007
Location: Emily's shop
Posts: 17,581
Originally Posted by paximperium View Post
A p-value means the results are due to chance.

A p=0.5 means a 50% probability that the results were due to random chance.

Most use p=0.05 as the cut-off.
Not quite. It means that if there were only chance/random effects (ie, the null hypothesis is true), you would expect to see results like this once out of 20 times. You can't reverse this and say, given a particular set of results, that the chance the null hypothesis is true is one in twenty.
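
A small simulation makes the distinction concrete. This sketch (with hypothetical numbers) estimates a p-value as the long-run frequency of results at least this good when a true 25% hit rate is assumed - which is all a p-value is:

Code:
import random

def p_by_simulation(observed_hits=33, n_trials=100, hit_rate=0.25, runs=100_000):
    # Fraction of pure-chance runs that do at least as well as the observed result.
    at_least = sum(
        sum(random.random() < hit_rate for _ in range(n_trials)) >= observed_hits
        for _ in range(runs)
    )
    return at_least / runs

print(p_by_simulation())  # ~0.04 for 33 hits in 100 trials at a true 25% rate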
Old 8th April 2009, 03:35 AM   #11
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Originally Posted by Ersby View Post
I can't do much with the data at the moment, since the program I use is on my computer at home.

I do have the excel files, though, and it's interesting to note that the "decline effect" has gone. If anything, results get slightly better. (Again, using p values on the y-axis, bearing in mind a low value indicates better results)

lol! Well, that is another myth ended then! I do agree your inclusion criteria are exactly what the Ganzfeld advocates have suggested - have you got Ian Hume's 2008 study in there? It might not be published yet, but it is complete - email him at Coventry and ask for it?

cj x
Old 8th April 2009, 06:00 AM   #12
Aepervius
Non credunt, semper verificare
 
Join Date: Aug 2003
Location: Sigil, the city of doors
Posts: 14,581
So... that means you get ~27-28%, and if those results were due to chance alone you would expect results like this 1 time out of 13 - not a p value that confident, if I read between the lines...

If I read that correctly, then the infamous ganzfeld experiments are much ado about nothing... Or am I completely misinterpreting you?
Old 8th April 2009, 06:30 AM   #13
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Originally Posted by Aepervius View Post
So... that means you get ~27-28%, and if those results were due to chance alone you would expect results like this 1 time out of 13 - not a p value that confident, if I read between the lines...

If I read that correctly, then the infamous ganzfeld experiments are much ado about nothing... Or am I completely misinterpreting you?

Chance should give 25%. Over a very large number of trials the result is closer to 27-28% - above chance, but not at significance - so it may just be "noise", not a real effect. I think that sums it up, but I could be wrong!

cj x
Old 8th April 2009, 06:45 AM   #14
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
That's how it looks. I'm going over it again, and keep expecting to find a huge mistake that I've overlooked. Nothing yet, but I'll keep looking.

I've not seen the Coventry experiment. I don't know if it'll go in, since I have to draw the line somewhere. As for writing a paper and getting it published, it's possible. I've got something half finished, but it'll need a lot of work to knock it into shape.
Old 8th April 2009, 08:05 AM   #15
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Now that you've made me feel guilty...

(I have a few excuses - my kitchen/family room/flooring renovations have become far more consuming than I imagined, and the family suffered a prolonged bout of illness throughout February.)

Things I hadn't finished working out:

The extent to which your inclusion/exclusion criteria would be found to be acceptable to parapsychologists. Cj.23 may have given you an answer on that one. I have avoided, as much as possible, discovering the results of the studies so as to be uninfluenced by them when considering inclusion/exclusion. I'm not sure about the Sargent study - the problem is that the discovery of the randomization/security problem was happenstance. Unless you apply that same process of discovery to other researchers, it's an arbitrary exclusion. Are papers presented at a conference, but not subsequently published, excluded (I'm thinking of Dalton)?

How did you weight your z-scores? With N?

Whether to use random-effects vs. fixed-effects.

What to do about the heterogeneity. How did you trim it? Just throw away outliers or is that how you came to your exclusion/inclusion criteria? I was going to recommend meta-regression. Then groups could be formed on the basis of significant influences that may be more homogeneous.

How to account for publication bias. I'm of the opinion that it's better to describe something than to pretend that you can fix it after the fact. Did you do a funnel plot? This may be a moot point without significant results, though.

This is what I can remember off the top of my head. I don't have access to my computer right now.

Linda
Old 8th April 2009, 08:25 AM   #16
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by fls View Post
Now that you've made me feel guilty...
Success!

Quote:
I'm not sure about the Sargent study - the problem is that the discovery of the randomization/security problem was happenstance. Unless you apply that same process of discovery to other researchers, it's an arbitrary exclusion. Are papers presented at a conference, but not subsequently published excluded (I'm thinking of Dalton)?
Sargent's work was criticised by a number of people, not just Susan Blackmore. When Jessica Utts did a paper using the ganzfeld, she removed Sargent's data. All things considered, I decided to remove them.

But, I should clarify that Sargent's later work, when the method was changed so the security flaw no longer applied, is included.

Papers presented at conferences are included, to guard against successful experiments being written up properly for publication while less successful work is dropped.

Quote:
How did you weight your z-scores? With N?
According to the program's instruction manual, yes.

Quote:
Whether to use random-effects vs. fixed-effects.
I haven't got that far into the manual.

Quote:
What to do about the heterogeneity. How did you trim it? Just throw away outliers or is that how you came to your exclusion/inclusion criteria? I was going to recommend meta-regression. Then groups could be formed on the basis of significant influences that may be more homogeneous.
I just removed outliers until the database became homogeneous. According to the program, one index is still showing as heterogeneous, but rather than trim a fourth experiment, I decided two out of three indices was good enough.

Quote:
How to account for publication bias. I'm of the opinion that it's better to describe something, than to pretend that you can fix it after the fact. Did you do a funnel plot? This may be a moot point without significant results, though.
I'm fairly sure this isn't an issue.

Quote:
This is what I can remember off the top of my head. I don't have access to my computer right now.

Linda
Thanks.
Old 8th April 2009, 02:50 PM   #17
Limbo
Jedi Consular
 
 
Join Date: Feb 2008
Posts: 3,077
This morning I took the liberty of e-mailing a link to this thread to Jessica Utts, asking her if she could spare a few minutes to glance at it and maybe offer some thoughts. I just got home from work a few minutes ago, and lo! A brief response:

"I didn't read all of the entries, but from the initial analysis it looks like the author made a classic mistake I warn my intro students about on day 3 or so - removing outliers just because they are outliers. In a case like the ganzfeld, it is particularly egregious because there are only likely to be high outliers, not low outliers in the other direction. (Low outliers would mean hit rates much less than 25%, which are not likely to happen.) As an analogy, it's like someone saying that the average fever when people have the flu is only about 100, and not a problem, after removing all temps above 102 from the data set because they are "outliers" and not consistent with the "normal" temperatures. That would, of course, produce an artificially low average. That's what has happened here.

Hope that helps." -Jessica Utts (personal correspondence)
Old 8th April 2009, 06:01 PM   #18
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by Ersby View Post
Basically, I think that the database of all ganzfeld experiments into ESP, when sorted for methodological soundness and standardness of protocol, doesn't give a statistically significant result. The 65-experiment database has an overall hit rate of 27.4%.

For now I'm limiting myself mostly to p values, because they're simple.

The result of p=0.07 translates roughly into odds of 1 in 13, ie, a 7% chance of these results occurring if there were no effect.
If the number of trials was 2854 and the hit rate was 27.4%, that means there were 782 hits. With an expected 25% hit rate, the mean number of hits would be only 713.5. The odds of at least 782 hits occurring, using a binomial test, is only 0.16%, or about one chance in 625. So through what sort of statistical legerdemain did you come up with only one chance in 13?
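
For reference, that calculation in code - a one-tailed binomial test pooling all trials into a single experiment, rather than combining study-level results (the counts are those quoted above):

Code:
from scipy.stats import binom

n, hits, chance = 2854, 782, 0.25
p_value = binom.sf(hits - 1, n, chance)  # P(at least 782 hits | true rate 25%)
print(p_value)                           # ~0.0016, ie, about 1 in 625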
Old 8th April 2009, 06:05 PM   #19
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by Limbo View Post
This morning I took the liberty of e-mailing a link to this thread to Jessica Utts, asking her if she could spare a few minutes to glance at it and maybe offer some thoughts. I just got home from work a few minutes ago, and lo! A brief response:

"I didn't read all of the entries, but from the initial analysis it looks like the author made a classic mistake I warn my intro students about on day 3 or so - removing outliers just because they are outliers. In a case like the ganzfeld, it is particularly egregious because there are only likely to be high outliers, not low outliers in the other direction. (Low outliers would mean hit rates much less than 25%, which are not likely to happen.) As an analogy, it's like someone saying that the average fever when people have the flu is only about 100, and not a problem, after removing all temps above 102 from the data set because they are "outliers" and not consistent with the "normal" temperatures. That would, of course, produce an artificially low average. That's what has happened here.

Hope that helps." -Jessica Utts (personal correspondence)
Even neglecting the problem of excluding inconvenient outliers, if you e-mail her again, ask if she can see any way that a 27.4% hit rate in 2854 trials (with a 25% hit rate expected by chance) can produce a statistically insignificant result.
Old 8th April 2009, 06:11 PM   #20
Professor Yaffle
Butterbeans and Breadcrumbs
 
 
Join Date: Jan 2007
Location: Emily's shop
Posts: 17,581
never mind
Old 8th April 2009, 11:10 PM   #21
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by Limbo View Post
This morning I took the liberty of e-mailing a link to this thread to Jessica Utts, asking her if she could spare a few minutes to glance at it and maybe offer some thoughts. I just got home from work a few minutes ago, and lo! A brief response:

"I didn't read all of the entries, but from the initial analysis it looks like the author made a classic mistake I warn my intro students about on day 3 or so - removing outliers just because they are outliers. In a case like the ganzfeld, it is particularly egregious because there are only likely to be high outliers, not low outliers in the other direction. (Low outliers would mean hit rates much less than 25%, which are not likely to happen.) As an analogy, it's like someone saying that the average fever when people have the flu is only about 100, and not a problem, after removing all temps above 102 from the data set because they are "outliers" and not consistent with the "normal" temperatures. That would, of course, produce an artificially low average. That's what has happened here.

Hope that helps." -Jessica Utts (personal correspondence)
Interesting. I shall have to research this topic some more, but I should point out that I didn't remove them because they were outliers, but because the database was heterogeneous.
Old 8th April 2009, 11:13 PM   #22
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by Rodney View Post
So through what sort of statistical legerdemain did you come up with only one chance in 13?
I already explained - Excel to give me p numbers, and then some meta-analysis software to crunch the numbers.
Old 9th April 2009, 03:29 AM   #23
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Okay, from what I can gather, the "random effects model" accounts for outliers without removing them entirely, so over the weekend I'll feed those figures in and see what happens.
Old 9th April 2009, 04:48 AM   #24
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
Originally Posted by Ersby View Post
Interesting. I shall have to research this topic some more, but I should point out that I didn't remove them because they were outliers, but because the database was heterogeneous.
So long as you remove outliers in both directions equally, is this an issue? Actually I guess it might be - it will reduce the effect size? However, I can't see it creating this much distortion... I'm more concerned by the obvious issue Rodney points out.

cj x
Old 9th April 2009, 04:50 AM   #25
cj.23
Master Poster
 
 
Join Date: Dec 2006
Posts: 2,827
While I am not overly committed to any given outcome, can I just say a huge thank you to Ersby for actually doing all this work in the first place? Thi is real sceptiocism or parapsychology at its best!

cj x
Old 9th April 2009, 05:52 AM   #26
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by cj.23 View Post
So long as you remove outliers in both directions equally, is this an issue? Actually I guess it might be - it will reduce the effect size? However, I can't see it creating this much distortion... I'm more concerned by the obvious issue Rodney points out.

cj x
My understanding is that the direction isn't important, only how far a result deviates from the mean. It just so happens that the three most extreme results were in a positive direction.
Old 9th April 2009, 05:55 AM   #27
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by cj.23 View Post
Thi is real sceptiocism or parapsychology at its best!

cj x
I appreciate the compliment, but am concerned by the recent deterioration in your typing skills. Are you feeling okay?
Old 9th April 2009, 06:05 AM   #28
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by cj.23 View Post
While I am not overly committed to any given outcome, can I just say a huge thank you to Ersby for actually doing all this work in the first place? Thi is real sceptiocism or parapsychology at its best!

cj x
I agree, and I look forward to Ersby receiving his Nobel Prize for [inadvertently] demonstrating the existence of psi in the Ganzfeld.
Old 9th April 2009, 06:09 AM   #29
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by Rodney View Post
I agree, and I look forward to Ersby receiving his Nobel Prize for [inadvertently] demonstrating the existence of psi in the Ganzfeld.
"Inadvertantly"? Pfft. I just want to know what's going on. I'm not trying to demonstrate one thing or the other.

(I especially want to know what's going on with cj.23's typing.)
Old 9th April 2009, 06:18 AM   #30
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Ersby View Post
Sargent's work was criticised by a number of people, not just Susan Blackmore. When Jessica Utts did a paper using the ganzfeld, she removed Sargent's data. All things considered, I decided to remove them.
Okay. The problem with the ganzfeld data is that it is clearly biased to start with. The question is whether or not it is possible to tease out a collection of studies that are less biased, or whether or not the selection process introduces some additional bias. This is critical, since it has already been discovered that the bulk of the measured effect has been due to bias, and what we are trying to discover is whether there is any effect beyond that. Selection criteria that add to, rather than reduce, bias need to be avoided. It is likely that studies where security issues are present are biased in a particular direction. As long as it is reasonable to assume (which I think it is) that those studies where security issues are identified by happenstance are more likely to be biased than those studies where they weren't identified, it is reasonable to exclude the Sargent study. And it doesn't look like you would get much argument on this point (another thing that we're trying to avoid).

Quote:
Papers presented at conferences are included, to guard against successul experiments being written up properly for publicaton, while less successful work is dropped.
Okay.

Quote:
According to the program's instruction manual, yes.
It is okay to use any appropriate measure for weighting, is all. People are used to using N or sometimes variance (which in the case of binomial measures with a fixed p-value are essentially the same thing), so it may not occur to them to use something different. Which means that you would have to argue for a different measure. But I don't think that it is the most appropriate measure in this case as the studies vary in other important characteristics (like quality), not just on size. To figure out a better weighting method may be more work than it's worth, though. It doesn't look like there was much difference between weighted and unweighted averages.

Quote:
I haven't got that far into the manual.
There isn't a good answer. Ideally, your assumption should be that you are measuring the same sort of thing in each experiment in the same way, and that the variation you find from study to study is due to chance. In that case, you use a fixed-effects analysis. However, that is clearly not the case here. We don't really know what 'psi' is, the size of the effect, or even whether we are measuring the same thing each time (e.g. clairvoyance vs. telepathy), in addition to the various ways which were chosen to measure the result (e.g. hit-miss vs. rank).

Instead, we could assume that whatever it is that we are measuring is a distribution of effects - varying with respect to what it is we are measuring and varying due to methodological issues. In that case, a random-effects analysis is performed. And that assumption seems more in keeping with what we think we are dealing with. However, there are assumptions made about this analysis that are problematic. A random-effects analysis assumes that the distribution of effects follows a normal distribution, or is at least symmetrical. And in reality the types of bias and 'clinical' variation we find will lead to results that are more likely to be distributed in the upper half. It also takes relatively more information from outliers and small studies than it does from the average, and it is those outliers which are problematic when it comes to bias in the ganzfeld data. In summary, using a random-effects analysis may increase the influence of bias on the results - just what we are trying to avoid.

The heterogeneity of the results is really telling us that we should not be combining the results. A better approach would be to form a homogeneous group - a group that is likely to be measuring the same effect in the same way - and combine that using a fixed-effects analysis.
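
The standard homogeneity check here is Cochran's Q, with I-squared as the share of variation beyond sampling error. A minimal sketch, assuming each study has been reduced to an effect estimate and a within-study variance (the numbers are hypothetical):

Code:
def cochran_q(effects, variances):
    # Weighted squared deviations of study effects from the fixed-effect pooled mean.
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))

effects = [0.10, 0.02, 0.35, -0.05]    # hypothetical per-study effect sizes
variances = [0.01, 0.02, 0.015, 0.03]  # hypothetical within-study variances
q = cochran_q(effects, variances)
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q)  # proportion of variation not explained by chance
print(q, i_squared)  # compare q against a chi-squared distribution with df degrees of freedom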

Quote:
I just removed outliers until the database became homogeneous. According to the program, one index is still showing as heterogeneous, but rather than trim a fourth experiment, I decided two out of three indices was good enough.
The problem is that your criteria for removing studies may introduce an additional bias. It is okay to remove outliers if they are clearly measuring something different than what the rest of the studies are measuring (Dalton is a good example). But it is not okay to remove them if they simply fall on the extremes. While outliers should be evenly distributed (making it just as likely for you to remove an unusually low result as an unusually high result), since the ganzfeld distribution is uneven, I suspect that the removal of outliers leads to the preferential removal of high results. Is that right?

Instead, think of it as trying to remove sources of heterogeneity. One possibility is to add to your exclusion criteria. Remove studies which did not include a sender, consider only studies where participants were drawn from a more general population rather than subgroups (like artists or siblings), stick to a narrow methodology, etc. If you can identify a homogeneous group of studies, then it would be reasonable to perform a meta-analysis on that group.

Despite the technical ability to do so, this data simply should not be combined as is. At best, it can be used to generate hypotheses for testing. At worst, it confirms that the use of ganzfeld to measure psi should be dropped as fruitless.

Linda
Old 9th April 2009, 06:19 AM   #31
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by Ersby View Post
"Inadvertantly"? Pfft. I just want to know what's going on. I'm not trying to demonstrate one thing or the other.
What I'm suggesting is that your statistical analysis is flawed. So, while you believe that the data do not demonstrate statistical significance, I believe that they do.
Old 9th April 2009, 06:46 AM   #32
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by cj.23 View Post
So long as you remove outliers in both directions equally, is this an issue? Actually I guess it might be - it will reduce the effect size? However, I can't see it creating this much distortion... I'm more concerned by the obvious issue Rodney points out.

cj x
It's not an issue. It's merely an artifact from performing an invalid analysis. If it is questionable as to whether it is reasonable to combine studies, then it is clearly invalid to pool the results of those studies as though they were one large study.

Linda
Old 9th April 2009, 07:00 AM   #33
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Originally Posted by fls View Post
The heterogeneity of the results is really telling us that we should not be combining the results. A better approach would be to form a homogeneous group - a group that is likely to be measuring the same effect in the same way - and combine that using a fixed-effects analysis.

The problem is that your criteria for removing studies may introduce an additional bias. It is okay to remove outliers if they are clearly measuring something different than what the rest of the studies are measuring (Dalton is a good example). But it is not okay to remove them if they simply fall on the extremes. While outliers should be evenly distributed (making it just as likely for you to remove an unusually low result as an unusually high result), since the ganzfeld distribution is uneven, I suspect that the removal of outliers leads to the preferential removal of high results. Is that right?

Instead, think of it as trying to remove sources of heterogeneity. One possibility is to add to your exclusion criteria. Remove studies which did not include a sender, consider only studies where participants were drawn from a more general population rather than subgroups (like artists or siblings), stick to a narrow methodology, etc. If you can identify a homogeneous group of studies, then it would be reasonable to perform a meta-analysis on that group.

Despite the technical ability to do so, this data simply should not be combined as is. At best, it can be used to generate hypotheses for testing. At worst, it confirms that the use of ganzfeld to measure psi should be dropped as fruitless.

Linda
It's interesting that you suggest bringing in more criteria for sorting the experiments.

I did reject a couple of extra inclusion criteria simply because there was never any consensus on their use in parapsychology: whether independent judges or the "receivers" themselves judge the trial, and whether a sender is present. These have been touched upon in a few experiments, but with results that never seemed to point to an effect, I always got the impression that the choice over using these protocols was more to do with convenience than any scientific reason.

I guess that the most standard would have the receiver doing the judging, and a sender present consciously sending the image(s). It shouldn't be too difficult to remove those that don't have these, since it's one of the things I made notes on.
Old 9th April 2009, 07:35 AM   #34
Ocelot
Illuminator
 
 
Join Date: Feb 2007
Location: London
Posts: 3,475
Of course statistical significance does not equal paranormal. It just means that the effects are unlikely to have come about by chance alone.

Other options include inadvertent sensory leakage due to ineffective controls, biases in target randomization, reporting biases, other methodological flaws, or even outright cheating.
Old 9th April 2009, 07:49 AM   #35
Ocelot
Illuminator
 
 
Join Date: Feb 2007
Location: London
Posts: 3,475
Originally Posted by Rodney View Post
If the number of trials was 2854 and the hit rate was 27.4%, that means there were 782 hits. With an expected 25% hit rate, the mean number of hits would be only 713.5. The odds of at least 782 hits occurring, using a binomial test, is only 0.16%, or about one chance in 625. So through what sort of statistical legerdemain did you come up with only one chance in 13?

It's a good question. The fact of the matter is that the method you describe is not how a meta-analysis is carried out. If it were, then there'd be no need for the software.

I'm afraid I'm not the right person to tell you how the meta-analysis should be carried out. However, I can tell you what I suspect.

It would be expected for there to be greater variation of results in the smaller studies, and due to reporting biases you'd expect to see more at the positive end of the scale. As such, greater weight is placed upon the larger studies, above and beyond the weight already present from them simply containing more data points. This means that one study with 100 data points showing a particular effect size is far more convincing than ten separate studies containing ten data points each, even if they totted up to the exact same 100 results.
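
To make that concrete, even the choice of weights matters. Weighting per-study z-values by sqrt(n) reproduces the single pooled binomial test exactly, while weighting by n (which some packages do) gives a different answer on the same data. A sketch with hypothetical counts:

Code:
import math

def study_z(hits, n, p0=0.25):
    # Normal-approximation z for one study's hit count against a 25% chance rate.
    return (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))

def stouffer(zs, weights):
    return sum(w * z for w, z in zip(weights, zs)) / math.sqrt(sum(w * w for w in weights))

data = [(10, 20), (52, 200)]  # hypothetical (hits, trials) per study
zs = [study_z(h, n) for h, n in data]
ns = [n for _, n in data]
print(stouffer(zs, [math.sqrt(n) for n in ns]))  # ~1.09: matches pooling all trials into one binomial z
print(stouffer(zs, ns))                          # ~0.58: trial-count weighting, a different answer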
Old 9th April 2009, 02:01 PM   #36
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Ersby View Post
It's interesting that you suggest bringing in more criteria for sorting the experiments.

I did reject a couple of extra inclusion criteria simply because there was never any consensus on their use in parapsychology: whether independent judges or the "receivers" themselves judge the trial, and whether a sender is present. These have been touched upon in a few experiments, but with results that never seemed to point to an effect, I always got the impression that the choice over using these protocols was more to do with convenience than any scientific reason.

I guess that the most standard would have the receiver doing the judging, and a sender present consciously sending the image(s). It shouldn't be too difficult to remove those that don't have these, since it's one of the things I made notes on.
Right now it's as though you took every clinical study on aspirin - from primary, secondary, and tertiary prevention of heart disease and stroke, to prevention of colon cancer - and put them together. Your statistical tests may show that you cannot draw any reliable conclusions overall, but you can if you consider specific questions, like secondary prevention of heart disease.

Linda
Old 20th April 2009, 12:46 AM   #38
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Well, I find myself at a dead end. Clearly without a deeper understanding of the techniques, I find myself unable to properly interpret the results.

Taking out experiments without someone deliberately sending at the time of the trial, and using only data from subject-judged trials, leaves us with 58 experiments and 2473 trials.

Putting the data through the random effects model gives a highly significant result (z=4.00912, p=0.00003, odds of 1 in 33,333). However, it is still highly heterogeneous, and the "amount of variance explained by sampling error" is 3.81%. And it is that figure that I have difficulty with. Is that good? Bad? And if it's bad, how bad?

In the meantime, I did the same calculation as before on the database, using p numbers and excluding outliers, and found the database was now made comfortably homogeneous (p=0.25) after removing just the two most extreme results, with a statistically significant result of z=2.5982, p=0.0046856, or odds of about 1 in 214.
Attached images: effect.JPG, send3h.JPG
Old 20th April 2009, 07:40 AM   #39
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Ersby View Post
Well, I find myself at a dead end. Clearly without a deeper understanding of the techniques, I find myself unable to properly interpret the results.

Taking out experiments without someone deliberately sending at the time of the trial, and using only data from subject-judged trials, leaves us with 58 experiments and 2473 trials.

Putting the data through the random effects model gives a highly significant result (z=4.00912, p=0.00003, odds of 1 in 33,333). However, it is still highly heterogeneous, and the "amount of variance explained by sampling error" is 3.81%. And it is that figure that I have difficulty with. Is that good? Bad? And if it's bad, how bad?
Fixed-effects models assume that each study represents a sample drawn from the same population (e.g. estimating the average height of an American by drawing a random sample from the US population over and over again). Random-effects models assume that each sample is drawn from a bunch of select groups within the US, but wonders whether the average height of an American can be estimated from those samples anyway (e.g. drawing a sample of basketball players, of gymnasts, of nursing home residents, of school teachers, etc.). It recognizes that each sample is drawn from a different group and therefore will represent a different average height from another group, but it expects that, in the end, the distribution of average heights for each group (not within groups, but between groups) will be a normal distribution. And that the various groups selected by each study represent a random or representative selection of groups from the overall population.

The distribution of results for a fixed-effects model represents sampling error, so your 'average height' and confidence intervals represents an estimate of the average height of an American plus an indication of the precision of that measurement. The distribution of results for a random-effects model represents both sampling error and variation in those factors which influence the outcome. So the 'average height' isn't an estimate of the average height of an American, but rather a summary of the average height among the groups. What you want to know is to what extent that average can be taken to be an estimate of the US population. And in order to figure that out, you want to know to what extent the groups represent a random sample (the sampling unit is a group, not the individuals within the group) or a representative sample, and to what extent the groups were collected and measured in the same way (i.e. experimental design). As you can see, if the groups are formed based on characteristics which are related to height (basketball players, gymnasts), the average height for each group will be quite different from one group to the next (heterogeneous). If the groups are formed based on characteristics that are unrelated to height (residents of New York, residents of San Francisco), then the average height for each group should be quite similar (homogeneous). If all group members were selected randomly and height was measured with a measuring tape in 1/4 inch increments, the results will be more homogeneous than if only the tallest or shortest members were chosen in each group and they were measured by a yardstick with 1 foot increments. This means that fixed-effects is simply a special case of random-effects where the group formation is not based on a modifying variable and the experimental design was similar.

Going back to the source of variation in a random-effects model then, it represents the variation based on sampling error (same as in the fixed-effects model) plus the variation based on group formation, given that groups were formed based on modifying variables plus variation caused by study design. So you want to remove the variation from sampling error from your model and concentrate on the remaining variation, as that is the variation which may have some explanatory power.
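
In code terms, the usual random-effects recipe (DerSimonian-Laird) does exactly that split: it estimates the between-study variance tau-squared from Cochran's Q, then re-pools with weights 1/(v + tau-squared). A minimal sketch with hypothetical inputs:

Code:
def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    re_pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return tau2, re_pooled

print(dersimonian_laird([0.10, 0.02, 0.35, -0.05],   # hypothetical effects
                        [0.01, 0.02, 0.015, 0.03]))  # hypothetical variances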

This is a long-winded way of saying that the amount of variance explained by sampling error in your analysis is good. But I was long-winded in order to set you up for the next part.

We're not really just interested in the average height of an American. We want to know if they are taller or shorter than other G8 countries (for example). So if we are attempting to estimate the average height based on different groups, we want to be careful not to select groups that are more likely to be tall (or short), because this won't be a representative sample of groups. We can try to control for this by performing a regression which includes all those variables we can think of that will influence height - i.e. we can control for 'profession' or 'age' or 'nutritional status'. Then we can see whether that leaves us with any height differences and with any unexplained variation. But realistically, it will be difficult to tell, under most circumstances, whether your 'effect size' represents a real difference and your unexplained variation represents your explanation of interest, or whether you simply have a non-representative sample of groups. Similarly we want to try to make it so that all measurements were equivalent, otherwise we won't know whether the unexplained variation is simply due to variation in experimental design.

For the ganzfeld database, if one assumes that there is an effect called 'psi', the descriptions of the studies suggest that the effect should be modified from study to study - either on the basis of modifying characteristics or on the basis of how the effect is measured. And our tests of heterogeneity bear this out. If we want to get an estimate of effect size, then we either need to find our special case - the groups do not vary on the basis of a modifying variable or kind of measurement. Or we need to control for all those factors which modify the effect which we are not interested in (because they wouldn't represent psi).

It was for those reasons that I suggested you try to find a group of homogeneous studies based on the description of the group and measurement. Or that you perform a meta-regression. To be honest, the idea of attempting to control for variation after the fact seems to be a waste of time for anything but generating exploratory hypotheses. And meta-regression is more complicated and often not available on meta-analysis software, so I'd be inclined not to bother with it. Plus, you'd have to go back and extract some more variables from the studies. Plus, if the response to Hyman is anything to go by, the results would be ignored if they eliminated additional unexplained variation.

I was hoping that you'd be able to identify a collection of studies that represent a homogeneous description that would be found to be homogeneous when subject to tests for that characteristic. As it is, we are left with a collection of studies that shows unexplained variation. But in the setting of non-psi sources of variation, we can't really draw any conclusions.

There is another approach that can be used. But it will not be palatable to believers. Instead of estimating an effect size and measuring its probability due to chance (useless because we haven't elucidated the non-chance sources for that effect), we can specify what effect size we would reasonably look for and determine whether the database supports an effect of that size. 'Psi' is supposed to be noticeable to the naked eye. This means at the very least a medium effect size, and more realistically a large effect size. One can conclude that it is highly unlikely that an effect that is visible to the naked eye is present, so that whatever it is that the ganzfeld is measuring, it isn't psi.

Quote:
In the meantime, I did the same calculation as before on the database, using p numbers and excluding outliers, and found the database was now made comfortably homogeneous (p=0.25) after removing just the two most extreme results, with a statistically significant result of z=2.5982, p=0.0046856, or odds of about 1 in 214.
Is there some way to remove those outliers (or any others that are appropriate) based on descriptions rather than results?

Linda
Old 21st April 2009, 01:29 AM   #40
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,872
Thanks for the lengthy but very clear explanation of fixed and random effects models. Much better than the textbooks I'd read.

Originally Posted by fls View Post
'Psi' is supposed to be noticeable to the naked eye. This means at the very least a medium effect size, and more realistically a large effect size. One can conclude that it is highly unlikely that an effect that is visible to the naked eye is present, so that whatever it is that the ganzfeld is measuring, it isn't psi.
I've lost count of the number of times I've read about psi in the lab having a very small effect size, so this may not be an option.

Quote:
Is there some way to remove those outliers (or any others that are appropriate) based on descriptions rather than results?

Linda
I doubt it. Both are pretty straightforward ganzfeld experiments.