Tags: pear, esp flimflam, statistics, testing

Old 1st March 2007, 01:12 PM   #1
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
How to manipulate data, get results as good as PEAR, and be $1,000,000 richer

Since the news of PEAR shutting down came out, I've been reading over some of the news reports and quotes given by the key players. One thing that struck me was their pure dogmatic belief that they have proof beyond a shadow of a doubt.

For example, in their test with a 50/50 outcome, they claim to have statistically significant results showing that ESP exists. The overall effect is only 0.5003 vs. 0.5000, but they have the statistics to show that this difference is not due to random chance alone. The first question that came to mind was exactly how many trials would need to be run to even get "statistically significant results". Time to break out my second-year stats course notes:

To test P_observed vs. 0.5, use the formula
(P_observed - 0.5)/sqrt(0.5*0.5/n) > Z
where Z is your critical value for alpha (Type I error) on a one sided test. Knowing this, we can solve for the minimum value of n such that the inequality holds true, assuming that P_observed = 0.5003 as they stated. Since we don't know what alpha level they used, I have to pick some common ones:

alpha = 0.1; z=1.28; n=45,622
alpha = 0.05; z=1.645; n=75,154
alpha = 0.01; z=2.326; n=150,330
alpha = 0.001; z=3.09; n=265,268
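
If anyone wants to redo this arithmetic with their own alpha or their own claimed hit rate, here's a rough sketch of the same back-of-the-envelope calculation in plain Python; note how brutally the required n blows up as the effect shrinks (it scales with one over the effect squared):

Code:
from math import ceil, sqrt
from statistics import NormalDist

def min_trials(p_obs, alpha, p_null=0.5):
    # smallest n with (p_obs - p_null) / sqrt(p_null*(1 - p_null)/n) > z_alpha
    z = NormalDist().inv_cdf(1 - alpha)
    return ceil((z * sqrt(p_null * (1 - p_null)) / (p_obs - p_null)) ** 2)

# e.g. min_trials(p_obs, alpha) for whatever hit rate the claim boils down to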

So if I ever want to replicate and test their results myself, I'm going to have to run roughly 50,000 trials before I start to see some kind of statistical evidence above chance. Imagine trying for the million-dollar prize with numbers like that; it's impossible. This is why the people at PEAR will not touch Randi's prize. If their results are indeed true, and Randi runs a tight test, then the preliminary test would need about 250,000 trials of guessing "Good" or "Bad". I don't think we could find anyone to volunteer a year's worth of their life full time sitting in on this exam.

Someone raised an interesting point: what if the claimant says their psychic ability takes them from a 50% chance of being right to a 60% chance? How many tests do you need to run to prove to Randi that the ability exists? 6 out of 10 right is laughable. 30 out of 50? Still not enough. 60 out of 100? We're getting there. 150 out of 250? You may start to make believers out of us. But as you can see, people complain that Randi keeps raising the bar: if you pass 50, he'll ask for 100; if you pass 100, he'll ask for 250. They accuse Randi of cheating. This is wrong.

The answer to this lies in a power calculation. This is a statistical calculation that asks: if the person really is right 60% of the time, how many trials must they do so that they pass our test 80% of the time? People have good days and bad days, so we want to make sure that even on a bad day, when the person is only running at 58%, they will still pass.

So for the power calculation, we have:
Random: 50%
Claimant: 60%
Randi's alpha level: 0.001
Now we solve for n. After struggling for a bit and running some simulations, I get a figure of 385 trials needed. With that, the scenario is set: if the claimant says their ability is 60%, then 385 trials will be done, no more, no less. This may sound like overkill ("But I tried it 50 times at home! Why do I need to do it 385 times?"), but that's the way it needs to be done.
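
For the curious, the textbook normal-approximation formula lands within a trial or two of the same answer without any simulating; a rough sketch (standard library only):

Code:
from math import ceil, sqrt
from statistics import NormalDist

def trials_for_power(p0, p1, alpha, power):
    # smallest n giving the stated power for a one-sided test of p1 vs p0
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha)   # critical value of the test
    z_b = nd.inv_cdf(power)       # quantile for the desired power
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
    return ceil(n)

print(trials_for_power(p0=0.5, p1=0.6, alpha=0.001, power=0.8))   # 384, a trial shy of the 385 above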

Here's the key thing to notice, and where jury-rigging the numbers comes in. I said that 385 tests, no more, no less, have to be done. In practice, how often is this really done? Let's say that in one test case, the person is successful 30 out of 40 times. The p-value in this case is indeed less than 0.001. Should you stop, or should you go on? According to my rules, you have to continue. Most people, though, will say that the sitter has proven themselves, so we have to quit: there's no need to go on and waste everybody's time if we've "proven the ability" after only 40 tests. This shortcut, while it looks naive and merely time-saving, is the fatal flaw in these studies.

An alpha level is the chance of a Type I error: the probability of declaring that some effect exists when there is truly nothing and the result was due to randomness alone. Therefore, if I run 10,000 trials using an alpha level of 0.001, I should see roughly 10 of them come up positive. Running against randomly simulated data, this matches: sometimes I get 8 successes, sometimes 16, but usually it's around 10.
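
Here's roughly what that baseline check looks like in code (a rough Python sketch of the fixed-length version: 385 flips per sitter, one-sided z-test at the end):

Code:
import random
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(1 - 0.001)   # about 3.09 for alpha = 0.001
N_FLIPS = 385
N_SITTERS = 10_000

def z_score(hits, n):
    # one-sided z for a hit count against a fair 50/50 coin
    return (hits - n / 2) / (n / 4) ** 0.5

false_positives = 0
for _ in range(N_SITTERS):
    hits = sum(random.random() < 0.5 for _ in range(N_FLIPS))
    false_positives += z_score(hits, N_FLIPS) > Z_CRIT
print(false_positives)   # should come out around 10, give or take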

Now I ran simulations to show what usually happens once human intervention comes in. For each person sitting the psychic test, I start off with 30 guesses. If the current p-value is < 0.001, I stop and call the person potentially gifted. Otherwise, I try one more coin flip. Is the person's hit rate high enough now? If not, one more flip. If I reach 385 flips and the person still hasn't shown powers beyond a shadow of a doubt, I finally call it a day.
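
In code, the stop-whenever-it-looks-good version only needs a few lines changed. A sketch of what I mean (peeking after every flip from 30 onward, same z cut-off):

Code:
import random
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(1 - 0.001)   # p < 0.001, one-sided
MIN_FLIPS, MAX_FLIPS = 30, 385

def looks_psychic():
    # flip a fair coin, peek at the z-score after every flip from 30 on,
    # and stop the moment it crosses the 0.001 line
    hits = 0
    for n in range(1, MAX_FLIPS + 1):
        hits += random.random() < 0.5
        if n >= MIN_FLIPS and (hits - n / 2) / (n / 4) ** 0.5 > Z_CRIT:
            return True       # declared "potentially gifted", test over
    return False              # went the full 385 without ever crossing

flagged = sum(looks_psychic() for _ in range(10_000))
print(flagged)   # pure chance data, yet this comes out far above the ~10 that alpha promises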

In each individual case, I have a p-value < 0.001 for every potentially psychic person. If this effect were due to random noise alone, again we should see only about 10 potential hits. In my simulated data, I am now getting, on average, 116 potential psychics. Remember, this data is pure random garbage. Now it's time to start spinning the results and throwing in statistical jargon:

10,000 trials were tried at an alpha level of 0.001. Assuming a typical binomial distribution for the trials, one would expect a mean of 10 trial successes. Of all trials, 116 showed a success. A typical hypothesis test for these levels shows a z-score of 47.46, which relates to a probability infinitesimally small (p << 0.0001). This is clear proof that an overall effect is harboured in human potential. Overall, the hit rate on the entire population is 0.5014 which, albeit slightly above chance, can be explained by performance fatigue, negative ESP skills (abnormally incorrect reading ability), and general skepticism in the total population.

Using this same kind of faulty reasoning, I'm fairly confident I could produce results just as significant with only 200 sittings. With 200 people and this lousy procedure, I should be able to get 3 "psychic" people and claim that 3/200 is absolute proof of ESP.

So now that I've blown my chance for the $1,000,000, do any of you know of a good "Woo" journal in case I want to pull off another Sokal scam?
Old 1st March 2007, 01:18 PM   #2
Kage
Critical Thinker
 
Join Date: May 2006
Posts: 455
AWESOME. You have made my day.

Kage
__________________
After victory, tighten your helmet strings. Oda Nobunaga.
Old 1st March 2007, 01:57 PM   #3
Zygar
Master Poster
 
Join Date: Oct 2006
Posts: 2,606
PEAR got good results?


Actually, this is very good data! Thanks.
Old 1st March 2007, 02:34 PM   #4
Kage
Critical Thinker
 
Join Date: May 2006
Posts: 455
Originally Posted by Zygar View Post
PEAR got good results?


Actually, this is very good data! Thanks.
PEAR did have a high Jungian philosophy conjecture to published work ratio.
__________________
After victory, tighten your helmet strings. Oda Nobunaga.
Old 1st March 2007, 02:43 PM   #5
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by Baron Samedi View Post
I don't think we could find anyone to volunteer a year's worth of their life full time sitting in on this exam.
Yes, we could.

Think about it. These people have spent years - decades - of their lives doing these tests. They claim that they do have positive evidence.

So, yes, we could actually find someone to volunteer a year's worth of their life full time sitting in on this exam.

Provided they really believed that they would get a positive result.

But they don't volunteer. Because they know that they don't have this evidence.

These tests cannot have taken all that much time. Maybe a couple of hours each day? Even if 8 hours was spent on nothing, the rest of the time would be spent on...whatever you want.

It's the paranormal gravy train.
Old 1st March 2007, 04:09 PM   #6
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by CFLarsen View Post
Yes, we could.

Think about it. These people have spent years - decades - of their lives doing these tests. They claim that they do have positive evidence.

So, yes, we could actually find someone to volunteer a year's worth of their life full time sitting in on this exam.

Provided they really believed that they would get a positive result.

But they don't volunteer. Because they know that they don't have this evidence.

These tests cannot have taken all that much time. Maybe a couple of hours each day? Even if 8 hours was spent on nothing, the rest of the time would be spent on...whatever you want.

It's the paranormal gravy train.
Ah, I should have stated more clearly... When I asked if we could find someone to sit in for a year, I should have said, "I don't think the JREF could find someone to waste a year of their life volunteering to oversee some woo perform this kind of a test." If I'm the one who's going to win (or try to scam) the $1,000,000, sure I'll do it. But the JREF people have lives to lead. A year for a preliminary test? And no one pays the referees for any of their time? And to sit through 250,000 dice throws and be totally aware and looking out for any kind of cheating? The JREF volunteer would snap for certain.
Old 1st March 2007, 04:19 PM   #7
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by Kage View Post
AWESOME. You have made my day.

Kage
Glad you liked it. I'd still love to try passing this off to a woo journal and see if they buy it as real, concrete proof. I'm in a spiteful and vengeful mood today.

Originally Posted by Zygar View Post
PEAR got good results?

Actually, this is very good data! Thanks.

Well, sure! If you look hard enough, you'll always get good results. Or as I was taught in school, "If you torture the data enough, it will confess." And, of course, all good statisticians hang out at S&M bars. It's true!
Old 2nd March 2007, 01:27 AM   #8
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,881
I don't know enough about the RNG experiments to say whether this optimal stopping actually occurred.

Another way figures could be inflated is if a subject is expected to do a set number of trials. If they start badly and get disheartened, they are much more likely to drop out than those who happen to start well. Since they don't complete their pre-set number of trials, their data may be removed from the analysis, skewing the results in a positive direction.
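
A toy version of that effect, under a completely made-up dropout rule (quit if you're not above chance after the first 50 flips, with dropouts excluded from the analysis), looks something like this:

Code:
import random
from statistics import mean

def completed_hit_rate(n_flips=385, check_at=50):
    # a sitter under a made-up dropout rule: anyone not above chance after
    # the first `check_at` flips gets disheartened, quits, and is excluded
    flips = [random.random() < 0.5 for _ in range(n_flips)]
    if sum(flips[:check_at]) <= check_at // 2:
        return None                      # dropped out, data discarded
    return sum(flips) / n_flips

rates = [r for r in (completed_hit_rate() for _ in range(10_000)) if r is not None]
print(len(rates), mean(rates))
# everyone is guessing blind at 50%, yet the survivors average above 0.5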
__________________
"Once a man admits complete and unshakeable faith in his own integrity, he is in an excellent frame of mind to be approached by con men." David W. Maurer, "The Big Con"
Old 2nd March 2007, 02:10 AM   #9
Alan Heap
Scholar
 
Join Date: Jan 2007
Posts: 86
Very nice write-up. This stuff should be obvious, yet it's ridiculous how many qualified individuals and professionals overlook it, whether by choice, ignorance or stupidity.
Old 2nd March 2007, 03:56 AM   #10
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by Ersby View Post
I don't know enough about the RNG experiments to say whether this optimal stopping actually occurred.

Another way figures could be inflated is if a subject is expected to do a set number of trials. If they start badly and get disheartened, they are much more likely to drop out than those who happen to start well. Since they don't complete their pre-set number of trials, their data may be removed from the analysis, skewing the results in a positive direction.
That was the whole reason I wanted to try the simulations. In reading up on how these studies are flawed, people always mention the floating-starting-point issue: "Oh no, 5 misses right off the bat? Is the equipment working? Do you need more time to warm up? Let's start again." Then there are people who lie, claiming that the person being tested started to tire, so their last batch of results was tossed out. But that one is easier to catch, since all data must be reported, and you can simply ask why those observations weren't used. I wanted to know exactly what happens if we're free to stop whenever we choose, and how much that alone can skew results. It's bigger than I thought. And in my case I am using each and every observation; I didn't delete a single one.

I did work out a nice catch to see if this jury-rigging is going on. All of my positive tests took between 30 and 385 observations; all of my negative tests took exactly 385. We can look to see if the successes have, on average, a much lower number of trials than the misses in the study. If so, we've caught them. This should work with your example of inflation as well.
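
If you record how long each sitter actually sat, the check itself is a one-liner. A rough sketch, bolted onto the same sort of stopping-rule simulation as before:

Code:
import random
from statistics import NormalDist, mean

Z_CRIT = NormalDist().inv_cdf(0.999)   # p < 0.001, one-sided

def one_sitter(lo=30, hi=385):
    # returns (flagged_as_psychic, number_of_flips_actually_used)
    hits = 0
    for n in range(1, hi + 1):
        hits += random.random() < 0.5
        if n >= lo and (hits - n / 2) / (n / 4) ** 0.5 > Z_CRIT:
            return True, n
    return False, hi

results = [one_sitter() for _ in range(10_000)]
hit_lengths = [n for ok, n in results if ok]
miss_lengths = [n for ok, n in results if not ok]
print(mean(hit_lengths), mean(miss_lengths))
# if the "psychics" needed far fewer trials on average than the duds,
# optional stopping is the prime suspect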
Old 2nd March 2007, 04:21 AM   #11
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Baron Samedi View Post
Here's the key thing to notice, and where jury-rigging the numbers comes in. I said that 385 tests, no more, no less, have to be done. In practice, how often is this really done? Let's say that in one test case, the person is successful 30 out of 40 times. The p-value in this case is indeed less than 0.001. Should you stop, or should you go on? According to my rules, you have to continue. Most people, though, will say that the sitter has proven themselves, so we have to quit: there's no need to go on and waste everybody's time if we've "proven the ability" after only 40 tests. This shortcut, while it looks naive and merely time-saving, is the fatal flaw in these studies.
That was a nice demonstration of how a systematic bias can create pattern from randomness. If you have a collection of random data and use randomness to look for a pattern, no pattern emerges. If you use a pattern to look for a pattern, then a pattern emerges.

Do you know whether or not this particular flaw was present in the PEAR data - i.e. did individual operators stop early? I thought they were blinded as to the results.

ETA: Oops, already asked and answered, I see.
Linda
Old 2nd March 2007, 04:21 AM   #12
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,881
Originally Posted by Baron Samedi View Post
We can look to see if the successes have, on average, a much lower number of trials than the misses in the study. If so, we've caught them.
That's interesting. With regards to the ganzfeld experiments and the RNG work the most successful experiments were the shortest.
__________________
"Once a man admits complete and unshakeable faith in his own integrity, he is in an excellent frame of mind to be approached by con men." David W. Maurer, "The Big Con"
Old 2nd March 2007, 04:38 AM   #13
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by Baron Samedi View Post
Ah, I should have stated more clearly... When I asked if we could find someone to sit in for a year, I should have said, "I don't think the JREF could find someone to waste a year of their life volunteering to oversee some woo perform this kind of a test." If I'm the one who's going to win (or try to scam) the $1,000,000, sure I'll do it. But the JREF people have lives to lead. A year for a preliminary test? And no one pays the referees for any of their time? And to sit through 250,000 dice throws and be totally aware and looking out for any kind of cheating? The JREF volunteer would snap for certain.
Absolutely agree.
Old 2nd March 2007, 07:16 AM   #14
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by fls View Post
Do you know whether or not this particular flaw was present in the PEAR data - i.e. did individual operators stop early? I thought they were blinded as to the results.

ETA: Oops, already asked and answered, I see.
Linda
Linda,

I have absolutely no clue. I just assumed that the people looking into PEAR's methodology were blinded to PEAR's results. But seriously, I thought PEAR was running trials where a computer would generate a number, create the Good/Bad image, and the person in the other room had to guess the image. Both teams of people could be blind to each other, but both are being recorded by the computer and the results tied together. If there is a third person who sees these results live, they can stop the test early if they deem it successful and yet still claim to have performed a double-blind experiment.

But this is just me guessing, trying to find ways of cheating and lying while still coming up with "proof beyond a shadow of a doubt". I have no proof of PEAR doing just this, just like Randi has no proof that Uri bends spoons with his hands. I do have a sneaky method which creates results very similar to PEAR's.

You've seen more medical/psychological stats tests than I have. Do you know if this type of test on the alpha levels is ever done to test H0?

Originally Posted by Ersby View Post
That's interesting. With regards to the ganzfeld experiments and the RNG work the most successful experiments were the shortest.
?! Really?! How short is shortest???
Old 2nd March 2007, 07:25 AM   #15
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,881
Originally Posted by Baron Samedi View Post
?! Really?! How short is shortest???
Well, the best ganzfeld results tend to be in series of twenty trials or less.

As for the RNG stuff, I don't know. I read it a few times, most recently in an abstract from a recent meta-analysis into the subject:

Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators--A Meta-Analysis
Bosch, Steinkamp, Boller
Psychological Bulletin, Vol. 132, No. 4. (July 2006), pp. 497-523
"Seance-room and other large-scale psychokinetic phenomena have fascinated humankind for decades. Experimental research has reduced these phenomena to attempts to influence (a) the fall of dice and, later, (b) the output of random number generators (RNGs). The meta-analysis combined 380 studies that assessed whether RNG output correlated with human intention and found a significant but very small overall effect size. The study effect sizes were strongly and inversely related to sample size and were extremely heterogeneous. A Monte Carlo simulation revealed that the small effect size, the relation between sample size and effect size, and the extreme effect size heterogeneity found could in principle be a result of publication bias."


(my emphasis)
__________________
"Once a man admits complete and unshakeable faith in his own integrity, he is in an excellent frame of mind to be approached by con men." David W. Maurer, "The Big Con"
Old 2nd March 2007, 07:47 AM   #16
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by Ersby View Post
Well, the best ganzfeld results tend to be in series of twenty trials or less.

As for the RNG stuff, I don't know. I read it a few times, most recently in an abstract from a recent meta-analysis into the subject:

Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators--A Meta-Analysis
Bosch, Steinkamp, Boller
Psychological Bulletin, Vol. 132, No. 4. (July 2006), pp. 497-523
"Seance-room and other large-scale psychokinetic phenomena have fascinated humankind for decades. Experimental research has reduced these phenomena to attempts to influence (a) the fall of dice and, later, (b) the output of random number generators (RNGs). The meta-analysis combined 380 studies that assessed whether RNG output correlated with human intention and found a significant but very small overall effect size. The study effect sizes were strongly and inversely related to sample size and were extremely heterogeneous. A Monte Carlo simulation revealed that the small effect size, the relation between sample size and effect size, and the extreme effect size heterogeneity found could in principle be a result of publication bias."


(my emphasis)
So people should be aware of this, even though this covers a meta-analysis. If a person well versed in psychology experiments falls for my false argument, then they haven't done their due diligence?
Old 2nd March 2007, 08:47 AM   #17
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Baron Samedi View Post
You've seen more medical/psychological stats tests than I have. Do you know if this type of test on the alpha levels is ever done to test H0?
You know, that's an interesting question considering the thread on HIV/circumcision.

It does happen sometimes in medical trials. If there is a good reason to think the intervention will be effective, in order to mitigate the questionable ethics of withholding an effective treatment in a placebo-controlled trial, often a separate review board is set up to review the data at fixed intervals while the trial is still ongoing. If the strength of the data exceeds a certain threshold, they will recommend the trial be stopped so that the intervention can be offered to all.

Hmmm....

The difference is that these situations are generally pre-selected, rather than post-hoc (e.g. it would be like pre-selecting only the prime-numbered trials out of your 10,000 to end early if the alpha is less than 0.001).

Linda
Old 2nd March 2007, 09:03 AM   #18
J. Arthur Hastur
Critical Thinker
 
Join Date: Jan 2006
Posts: 495
It's like statistics all over again. I have a headache.
__________________
The Jews, the Muslims and the Christians, They've all got it wrong. The people of the world only divide into two kinds, One sort with brains who hold no religion, The other with religion and no brain.
- Abu-al-Ala al-Marri, 10th century Syrian poet
Old 2nd March 2007, 10:46 AM   #19
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by fls View Post
You know, that's an interesting question considering the thread on HIV/circumcision.

It does happen sometimes in medical trials. If there is a good reason to think the intervention will be effective, in order to mitigate the questionable ethics of withholding an effective treatment in a placebo-controlled trial, often a separate review board is set up to review the data at fixed intervals while the trial is still ongoing. If the strength of the data exceeds a certain threshold, they will recommend the trial be stopped so that the intervention can be offered to all.

Hmmm....

The difference is that these situations are generally pre-selected, rather than post-hoc (e.g. it would be like pre-selecting only the prime-numbered trials out of your 10,000 to end early if the alpha is less than 0.001).

Linda
Here we go. I tried your idea and ran a new simulation. I tested at 30, at 386, and at all of the primes in between, and only at these points, if my observed Z > 3.1, do I stop and call it a success with no need to continue. I also bumped it up to 1,000,000 simulated reps, just to make sure I get a good estimate.

Unrestricted (re-evaluate after each observation)
Success: 11,587
Failed: 988,413

Restricted (re-evaluate at primes)
Success: 9,216
Failed: 990,784

Doing that prime-number restriction only dropped the result to 0.92%. So with 400 subjects, each one making between 30 and 386 guesses, I have a great chance of getting 3 "psychics", maybe more.
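
For reference, the only change from the earlier sketch is the set of flip counts where you're allowed to peek; something like this (100,000 reps here instead of a million, just to keep it quick):

Code:
import random

Z_CRIT = 3.1          # the cut-off used above (roughly p < 0.001, one-sided)
LO, HI = 30, 386

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

CHECKPOINTS = {LO, HI} | {k for k in range(LO, HI + 1) if is_prime(k)}

def flagged():
    hits = 0
    for n in range(1, HI + 1):
        hits += random.random() < 0.5
        if n in CHECKPOINTS and (hits - n / 2) / (n / 4) ** 0.5 > Z_CRIT:
            return True
    return False

reps = 100_000
print(sum(flagged() for _ in range(reps)) / reps)
# fewer chances to peek means fewer spurious "psychics" -- but the rate is
# still way above the nominal 0.1%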
Old 2nd March 2007, 11:57 AM   #20
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Baron Samedi View Post
Here we go. I tried your idea and ran a new simulation. I tested at 30, at 386, and at all of the primes in between, and only at these points, if my observed Z > 3.1, do I stop and call it a success with no need to continue. I also bumped it up to 1,000,000 simulated reps, just to make sure I get a good estimate.

Unrestricted (re-evaluate after each observation)
Success: 11,587
Failed: 988,413

Restricted (re-evaluate at primes)
Success: 9,216
Failed: 990,784

Doing that prime-number restriction only dropped the result to 0.92%. So with 400 subjects, each one making between 30 and 386 guesses, I have a great chance of getting 3 "psychics", maybe more.
I'm sorry. Now I feel bad that you did all that work. I didn't give an adequate description of what I meant. That result is not at all surprising (except that I'm surprised it dropped as much as it did).

I meant that out of your 10,000 (or 1,000,000) trials, the only trials that you start testing for significance after 30 guesses would be trials number 2, 3, 5, 7, 11, 13, 17, etc. And I chose prime numbers for an example, but I don't know whether or not the proportion of numbers that are prime numbers is equal to the proportion of trials subject to early review (I suspect using prime numbers is way too high - something like squares of whole numbers may be closer).

Linda
Old 2nd March 2007, 01:15 PM   #21
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by fls View Post
I'm sorry. Now I feel bad that you did all that work. I didn't give an adequate description of what I meant. That result is not at all surprising (except that I'm surprised it dropped as much as it did).

I meant that out of your 10,000 (or 1,000,000) trials, the only trials that you start testing for significance after 30 guesses would be trials number 2, 3, 5, 7, 11, 13, 17, etc. And I chose prime numbers for an example, but I don't know whether or not the proportion of numbers that are prime numbers is equal to the proportion of trials subject to early review (I suspect using prime numbers is way too high - something like squares of whole numbers may be closer).

Linda
Sadly, it's actually fun for me to do and try these sims. I need a life.

Perhaps I'm missing something here. I think I follow you that we only have the option to stop on the trial level, and not the tester level. In real life, then, I don't see how this can work. If we look at the AIDS in Africa case, we may only have one trial. One doctor sets up the study, patients come in, are given either drug or placebo, and are tested at time=t. The patient is my coin flip. The doctor, if p << 0.001, will stop the trial early and publish results.

In the coin flip/psychic case, I'm going to have 10,000 people come in to be tested, each with 385 guesses to make. Your suggestion says that most of these people have to do the full 385. I'm suggesting that even though the rule may be in place, it may not be observed. In fact, looking at my data, 99% of the people did go the full 385 tests. So someone may innocently believe that stopping early has no major impact. Time is money, people have lives, so why keep running more and more tests when a person is clearly showing better-than-random results? How many testers would be honest enough to continue?
Old 4th March 2007, 11:15 AM   #22
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by Baron Samedi View Post
Perhaps I'm missing something here. I think I follow you that we only have the option to stop on the trial level, and not the tester level. In real life, then, I don't see how this can work. If we look at the AIDS in Africa case, we may only have one trial. One doctor sets up the study, patients come in, are given either drug or placebo, and are tested at time=t. The patient is my coin flip. The doctor, if p << 0.001, will stop the trial early and publish results.
I was thinking of the body of medical research as representing a large number of trials, some of which (like the HIV/circumcision trial) have the p value calculated before the trial is finished and will stop early if p is less than their cut-off.

So one could attempt to argue that the benefit of conventional medical treatment (taken as a whole) has been exaggerated by the presence of a systematic bias, just like you argued that the actual number of psychics is exaggerated by the presence of a systematic bias (i.e. 3 psychics out of 200 when you'd expect none).

Quote:
In the coin flip/psychic case, I'm going to have 10,000 people come in to be tested, each with 385 guesses to make. Your suggestion says that most of these people have to do the full 385. I'm suggesting that even though the rule may be in place, it may not be observed. In fact, looking at my data, 99% of the people did go the full 385 tests. So someone may innocently believe that stopping early has no major impact. Time is money, people have lives, so why keep running more and more tests when a person is clearly showing better-than-random results? How many testers would be honest enough to continue?
I don't think it's a matter of honesty, but rather a matter of whether the effect of the bias is recognized - something that's easy to miss when you tend to focus only on your own trial. It's a more complicated concept than a single test.

Linda
Old 7th March 2007, 07:37 AM   #23
Cuddles
Penultimate Amazing
 
Join Date: Jul 2006
Posts: 18,795
As an interesting thought that just popped into my head: there are often reports of medical trials that have been stopped early because of obvious negative effects. Is it possible that at least some of these could be due to this effect and not actually due to the treatment? Of course, if this does happen it would be false negatives rather than false positives, and with experimental drugs it is always better safe than sorry, but it would be interesting to know if we could be losing potential treatments to this effect.
Old 7th March 2007, 02:11 PM   #24
Baron Samedi
Critical Thinker
 
Join Date: Dec 2006
Posts: 476
Originally Posted by Cuddles View Post
As an interesting thought that just popped into my head: there are often reports of medical trials that have been stopped early because of obvious negative effects. Is it possible that at least some of these could be due to this effect and not actually due to the treatment? Of course, if this does happen it would be false negatives rather than false positives, and with experimental drugs it is always better safe than sorry, but it would be interesting to know if we could be losing potential treatments to this effect.
That's an interesting idea. I'll have to think about it some more, but I don't think it would be as big an issue. In the first case, you're looking for an effect on the high side, and you stop on a high-side effect. In this case, you're looking for an effect on the high side, but you stop on a low-side effect. I think it's a wash.

For example, the test is usually of no difference between drug A, already on the market, and drug B, which you just developed. The worry is that we stop too early and declare B worse when in fact B is better. So if observed minus expected is too low, we stop, where "expected" is 0 under the assumption of no difference. If B is in fact better, then the expected difference should be something like +5 or +10. So:

Under the null, we stop when Z_obs = (observed - 0)/se < -3.1, and with repeated looks the nominal p of 0.001 inflates to something like 0.02.
We stop in error when (observed - true difference)/se < -3.1; with a genuinely positive true difference, the chance of making that mistake is much smaller.

Or let's think of 100 coin tosses. I'll stop if the person gets 40% correct or worse. The chance of my stopping the test, assuming no ability, is Z = (.4 - .5)/(0.5/10) = -2, therefore p ≈ 2.3%.
However, if the person really and truly had ESP and was up to a 60% success rate, the probability that I would stop and say "no evidence" is now
Z = (.4 - .6)/sqrt(.6*.4/100) = -4, therefore p << 0.1%.
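
Those two tail areas straight from the normal CDF, for anyone who wants to check the arithmetic:

Code:
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
n = 100

# chance of stopping at "40% or worse" when the sitter is pure chance (p = 0.5)
z_null = (0.40 - 0.50) / sqrt(0.5 * 0.5 / n)
# chance of that same stop when the sitter genuinely hits at 60%
z_true = (0.40 - 0.60) / sqrt(0.6 * 0.4 / n)

print(nd.cdf(z_null), nd.cdf(z_true))   # about 0.023 vs 0.00002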

Now I've made my own head hurt. Ouch.