IS Forum

 


Tags: Daryl Bem, Louie Savva, parapsychology

Old 13th December 2006, 11:08 AM   #81
psp02ls
Scholar
 
Join Date: Mar 2006
Posts: 73
Originally Posted by psp02ls View Post
It is odd that this topic has turned into a discussion of methodology.
I guess I meant specifically the methodology of precognitive habituation. It isn't really deserving of much discussion.
Old 13th December 2006, 11:37 AM   #82
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by psp02ls View Post
It is odd that this topic has turned into a discussion of methodology. It certainly does not merit much more discussion.

David. If you are just going to believe everything that is written in a journal or proceedings, then you are going to fall for a lot of rubbish (as I did). Don't believe everything you read.
Do you mean that some write-ups do not describe what actually went on during the experiments?

How are we able to tell which experimenters are being dishonest?

When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.

Last edited by Paul C. Anagnostopoulos; 13th December 2006 at 12:03 PM. Reason: fix tag
Old 13th December 2006, 11:40 AM   #83
psp02ls
Scholar
 
Join Date: Mar 2006
Posts: 73
Originally Posted by davidsmith73 View Post
How are we able to tell which experimenters are being dishonest?
First I would discount anybody who states that psi is established!
Old 13th December 2006, 11:44 AM   #84
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
From the 2003 Bem paper on PH:

"At this point, I asked a skeptical colleague at Williams College, Professor Kenneth Savitsky, to try replicating the PH effect using supraliminal exposures. But I made two critical changes: First, the on-screen directions explicitly instructed the participant to “keep your eyes on the picture as it is flashed—even if it is one of the unpleasant pictures.” Second, participants
were given the option of participating in the study without the negative pictures. (There were no erotic trials in the Williams replication.) Savitsky conducted the experiment as a class exercise in a laboratory course in
experimental social psychology. Serving as the experimenter, he ran himself and the 17 students in the experiment; each student was then instructed to run 4 of his or her friends. This produced a total of 87 participants, 84 of whom experienced the negative trials. Collectively they obtained a
hit rate of 52.5% (t(83) = 1.57, p = .061) on the negative trials. More importantly, the positive correlation between hit rate and Emotional Reactivity was restored: The 32 emotionally reactive participants obtained a hit rate of 56.0%, t(31) = 2.66, p = .006. In particular, the 12 emotionally
reactive men in the sample achieved a very high hit rate of 59.7%, t(11) = 3.02, p = .006. The hit rate on the low-affect trials was at chance."
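As a rough check on the first statistic in the quote, the one-tailed p-value of a t statistic can be approximated with the normal distribution, which is adequate at df = 83. This is a sketch of an approximation, not anything from the paper:

```python
import math

def one_tailed_p_normal(t):
    """Normal approximation to the one-tailed p-value of a t statistic.

    Adequate only for large degrees of freedom; the exact t
    distribution gives a slightly larger p at df = 83.
    """
    return 0.5 * math.erfc(t / math.sqrt(2))

# Bem reports t(83) = 1.57, p = .061 for the negative trials.
p = one_tailed_p_normal(1.57)
print(round(p, 3))  # about 0.058, consistent with the reported .061
```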


Does anyone know if this study has been published?
Old 13th December 2006, 11:45 AM   #85
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by davidsmith73 View Post
When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.
.....why?

If you accept their word that what they are telling you they are doing is accurate and sincere, why would you not accept their word that their conclusions or methodology are sound?

Once you take people's word for granted, you throw out everything else. You admit that you are a hardcore, blind believer.
Old 13th December 2006, 11:48 AM   #86
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by davidsmith73 View Post
From the 2003 Bem paper on PH:

"At this point, I asked a skeptical colleague at Williams College, Professor Kenneth Savitsky, to try replicating the PH effect using supraliminal exposures. But I made two critical changes: First, the on-screen directions explicitly instructed the participant to “keep your eyes on the picture as it is flashed—even if it is one of the unpleasant pictures.” Second, participants
were given the option of participating in the study without the negative pictures. (There were no erotic trials in the Williams replication.) Savitsky conducted the experiment as a class exercise in a laboratory course in
experimental social psychology. Serving as the experimenter, he ran himself and the 17 students in the experiment; each student was then instructed to run 4 of his or her friends. This produced a total of 87 participants, 84 of whom experienced the negative trials. Collectively they obtained a
hit rate of 52.5% (t(83) = 1.57, p = .061) on the negative trials. More importantly, the positive correlation between hit rate and Emotional Reactivity was restored: The 32 emotionally reactive participants obtained a hit rate of 56.0%, t(31) = 2.66, p = .006. In particular, the 12 emotionally
reactive men in the sample achieved a very high hit rate of 59.7%, t(11) = 3.02, p = .006. The hit rate on the low-affect trials was at chance."


Does anyone know if this study has been published?
It is extremely telling that you have no problems with Bem trying to shift the onus onto the skeptics, while changing the premises of the experiment with two critical changes.

Hello? That's not replication.
Old 13th December 2006, 11:51 AM   #87
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by davidsmith73 View Post
It's the best approach that can be taken at the present time IMO. If there were no effect, then the number of experiments with positive results would be at chance level. Although meta-analyses can't be taken as "proof" of anything, I think they do show that the number of positive results in certain kinds of experiments is above chance.
How do you know it's above chance (taking bias into consideration as well)? You don't know how many negative studies went unpublished, and were therefore excluded, that would have been published and included had they been positive.

Quote:
I understand your point Linda. But remember that the 1-in-6 (or 1-in-13) random successful experiment would have a p-value of 0.05 in your illustration.

If we stay with the precognitive habituation experiments, Bem's studies had a much more impressive p-value than that. Louie's successful experiment less so, but then he had a smaller N.
Thousands of parapsychology experiments have been performed. The ones that get noticed are those that have "significant" p-values. It's not unexpected to come up with "one in a thousand" results out of thousands of studies. That doesn't even take into account the analytic flaws in the papers you referenced (multiple comparisons without adjustment in p-values, for example).
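Linda's point about selective notice can be illustrated with a quick null simulation. This is a sketch; the study count, trial count and significance threshold are illustrative assumptions, not figures from parapsychology:

```python
import math
import random

random.seed(1)

ALPHA = 0.05      # conventional significance threshold
N_STUDIES = 1000  # hypothetical number of null experiments
N_TRIALS = 100    # binary trials per experiment; true hit rate is 50%

def two_sided_p(hits, n=N_TRIALS):
    """Two-sided p-value for a binomial count, normal approximation."""
    z = abs(hits - n / 2) / math.sqrt(n * 0.25)
    return math.erfc(z / math.sqrt(2))

significant = 0
for _ in range(N_STUDIES):
    hits = sum(random.random() < 0.5 for _ in range(N_TRIALS))
    if two_sided_p(hits) < ALPHA:
        significant += 1

# Typically around 5% of the purely-chance experiments come out
# "significant" -- those are the ones that get written up and noticed.
print(significant)
```

If only the "significant" runs are published, the literature looks like evidence for an effect that does not exist.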

Quote:
Do you think that meta-analyses are suited to resolving this kind of issue?
If they have an honest denominator, and if bias is taken into consideration.

Quote:
Also, you have the added problem that experiments are seldom exact replications. Experimental conditions are changed, which could legitimately affect the outcome of the experiment.

For example, I still don't understand why Louie et al decided to change the image exposure to supraliminal in their followup PH experiment. Experiments on conventional mere-exposure effects show that supraliminal exposures reduce the effect, and Bem's experiments show the same thing. This could be why they couldn't replicate their own findings: they changed the conditions.
There are always excuses. Bem came up with "precognitive boredom".

Linda
Old 13th December 2006, 11:58 AM   #88
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by CFLarsen View Post
.....why?

If you accept their word that what they are telling you they are doing is accurate and sincere, why would you not accept their word that their conclusions or methodology are sound?
Because believing that a paper accurately and truthfully represents what actually went on during the experiment is a different issue from accepting that the conclusions and methodology are valid. Someone could have accurately and truthfully written a methods and results section that contains methodological flaws and draws unjustified conclusions from accurate and truthful data.

Quote:
Once you take people's word for granted, you throw out everything else. You admit that you are a hardcore, blind believer.
I know that fraud happens in science. I just don't believe in cherry picking which experiments are fraudulent. How are we to know which ones are? Independent replication sorts this out I suppose.
Old 13th December 2006, 12:06 PM   #89
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by CFLarsen View Post
It is extremely telling that you have no problems with Bem trying to shift the onus onto the skeptics, while changing the premises of the experiment with two critical changes.

Hello? That's not replication.
How is Bem shifting the onus onto the skeptics?

I don't see how the premise of the experiment is changed at all in that replication. He simply left out the erotic pictures and tried to ensure that the participants would not look away from the horrible images. So it was replication of the effect using negative images.
Old 13th December 2006, 12:06 PM   #90
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by davidsmith73 View Post
Because believing that a paper accurately and truthfully represents what actually went on during the experiment is a different issue from accepting that the conclusions and methodology are valid. Someone could have accurately and truthfully written a methods and results section that contains methodological flaws and draws unjustified conclusions from accurate and truthful data.
This reveals how willingly gullible you are. No, David, we do not take people's word for granted, no matter what they are saying.

If they say that what they are doing is A-OK, we check. Precisely the same way we check their results and methodology.

You are being wildly inconsistent here. You trust them to do right, because you want them to do right.

Originally Posted by davidsmith73 View Post
I know that fraud happens in science. I just don't believe in cherry picking which experiments are fraudulent. How are we to know which ones are? Independent replication sorts this out I suppose.
Yeah. Then, explain how Bem can possibly suggest what he did. Would you call that "replication"?
Old 13th December 2006, 12:18 PM   #91
Merko
Graduate Poster
 
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by Shera View Post
How do I find out what the standing of a university and a professional journal is? Specifically, Laurentian University in Canada and the journal Perceptual and Motor Skills.
You can get a good idea of a journal's standing within its field from the Science Citation Index. Unfortunately it's not freely available, but if you're affiliated with a university you can probably get access from there. Or someone else with access (I don't have it, unfortunately) could give you the index for this particular journal and some others in the same area.

Of course, a good standing within a certain field of research is no absolute guarantee of quality, but the title of this one suggests it's an empirically based field.
Old 13th December 2006, 12:21 PM   #92
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by fls View Post
How do you know it's above chance (taking bias into consideration as well)? You don't know how many negative studies went unpublished, and were therefore excluded, that would have been published and included had they been positive.
True. We don't know how many unpublished negative studies there might be. Well, within reason of course. But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).
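The estimation procedure being alluded to is usually Rosenthal's "fail-safe N": how many unpublished null studies (averaging z = 0) would be needed to pull a combined Stouffer z-score below the .05 threshold. A minimal sketch under that assumption; the z-scores below are invented for illustration, not taken from any real meta-analysis:

```python
import math

def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N.

    Number of unseen null studies (mean z = 0) needed to drag the
    combined Stouffer z-score, sum(z) / sqrt(k + n), below z_crit
    (1.645 is the one-tailed .05 threshold).
    """
    k = len(z_scores)
    n = (sum(z_scores) / z_crit) ** 2 - k
    return max(0, math.ceil(n))

# Five hypothetical "positive" studies, each individually significant.
zs = [2.1, 1.9, 2.5, 1.7, 2.3]
print(fail_safe_n(zs))  # 36 file-drawer studies would nullify them
```

Whether that number is implausibly large is exactly the judgment call in question, and the estimate is only as honest as its assumption that the missing studies average z = 0.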

Quote:
Thousands of parapsychology experiments have been performed. The ones that get noticed are those that have "significant" p-values. It's not unexpected to come up with "one in a thousand" results out of thousands of studies.
Isn't this what meta-analysis is supposed to address?

Quote:
That doesn't even take into account the analytic flaws in the papers you referenced (multiple comparisons without adjustment in p-values, for example).
Could you explain this?


Quote:
There are always excuses. Bem came up with "precognitive boredom".
Eh? That term was used to describe the unexpected results of the low-affect trials with more than 8 subliminal exposures.
Old 13th December 2006, 12:28 PM   #93
Merko
Graduate Poster
 
 
Join Date: Nov 2006
Posts: 1,899
As for the file-drawer effect, I think anyone who has been even slightly involved with any kind of research should understand it, but obviously many don't. Personally, I've written one small article for a scientific journal (it's about an algorithm), but for this one article about an algorithm that works, how many algorithms had I been working on that either did not work, or worked but produced results no better than what had already been published, or were simply of no general interest? Even I have no idea. OK, this field may be one extreme, with areas requiring large planned studies at the other. But even in that case, we should expect that studies that don't show promise of results would be much more likely to drag out over time and eventually be abandoned for lack of resources. And I wouldn't even call that scientific dishonesty. A responsible researcher should not waste money.

Instead, I think the fault is with the idea that it is possible to perform some sort of meta-proof by aggregating results from many studies and in this manner somehow enhance their significance. That idea is completely flawed. Either we can define an effect and a replicable way to test it, in which case it will give these results consistently; or we cannot, and there are multiple, poorly defined effects, with tests that have not been properly examined or that cannot be replicated. Adding those apples and oranges into one bowl does not produce anything of value.
Old 13th December 2006, 12:30 PM   #94
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by CFLarsen View Post
This reveals how willingly gullible you are. No, David, we do not take people's word for granted, no matter what they are saying.

If they say that what they are doing is A-OK, we check. Precisely the same way we check their results and methodology.

You're perfectly free to do that. But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?
Old 13th December 2006, 12:34 PM   #95
Merko
Graduate Poster
 
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by davidsmith73 View Post
But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).
It's completely bogus, completely unreliable. To prove any sort of psi effect, we only need *one* study. But it has to be done well. The only way to ensure that it has been done well is to have other scientists replicate the same study, testing for the exact same claimed phenomenon and trying to find flaws in the method.

The method of meta-analysis is based on the flawed idea that all studies are done properly. They are not. Not just in parapsychology, but in any field. No one would care if ten or a hundred scientific studies claimed to have produced cold fusion. It would not prove cold fusion is feasible. We would demand *one* study that can be examined by other experts, in which we can find no flaw, and which can be repeated with the same results. We should do the same for parapsychology.
Old 13th December 2006, 12:35 PM   #96
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by Merko View Post
As for the file-drawer effect, I think anyone who has been even slightly involved with any kind of research should understand it, but obviously many don't. Personally, I've written one small article for a scientific journal (it's about an algorithm), but for this one article about an algorithm that works, how many algorithms had I been working on that either did not work, or worked but produced results no better than what had already been published, or were simply of no general interest? Even I have no idea. OK, this field may be one extreme, with areas requiring large planned studies at the other. But even in that case, we should expect that studies that don't show promise of results would be much more likely to drag out over time and eventually be abandoned for lack of resources. And I wouldn't even call that scientific dishonesty. A responsible researcher should not waste money.

Instead, I think the fault is with the idea that it is possible to perform some sort of meta-proof by aggregating results from many studies and in this manner somehow enhance their significance. That idea is completely flawed. Either we can define an effect and a replicable way to test it, in which case it will give these results consistently; or we cannot, and there are multiple, poorly defined effects, with tests that have not been properly examined or that cannot be replicated. Adding those apples and oranges into one bowl does not produce anything of value.
Good points. But how are researchers in parapsychology supposed to respond when a critic claims that the number of positive experiments is what we would expect by chance? If meta-analyses are not to be used in any way, it seems that parapsychologists have no way to answer such a criticism.
Old 13th December 2006, 12:36 PM   #97
psp02ls
Scholar
 
Join Date: Mar 2006
Posts: 73
Originally Posted by davidsmith73 View Post
But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?
I thought I'd already implied that. You can't. Somebody could literally sit there and make the data up on bits of paper and you wouldn't know. Soal's work stood for a long time. Go and read how he got found out, if you don't know.
Old 13th December 2006, 12:39 PM   #98
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by davidsmith73 View Post
You're perfectly free to do that. But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?
That's the point, David: We can't know.

So, why are you so eager to believe these people when they say that they are doing right, when you (say that you) doubt them when they report their results?

And please explain how Bem can possibly suggest what he did. Would you call that "replication"?
Old 13th December 2006, 01:16 PM   #99
Merko
Graduate Poster
 
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by davidsmith73 View Post
But how are researchers in parapsychology supposed to respond when a critic claims that the number of positive experiments is what we would expect by chance? If meta-analyses are not to be used in any way, it seems that parapsychologists have no way to answer such a criticism.
First of all, they should stick their necks out and make a very strong statement that, according to their research, some very well defined effect definitely exists and can be measured through a test that they have performed and that could be repeated by other scientists. If they really are so sure, then they need to put their scientific reputations at risk here. It's not enough to hint that 'further research is merited'.

Now, if someone does this, then we can be sure that other parapsychologists, and interested scientists from other fields - and the JREF too for that matter - will be very interested in examining those claims. When many such studies have been done and they generally confirm the results, then we can consider the findings to be confirmed.

I know of no such claim. Do you? I think we need to be specific here. Discussing the validity of PSI claims in general is like claiming that either gravitation or wormholes or cold fusion may exist.
Old 13th December 2006, 02:10 PM   #100
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by davidsmith73 View Post
True. We don't know how many unpublished negative studies there might be. Well, within reason of course. But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).
I'm suggesting that the number of necessary negative studies falls well within the realm of "plausible".

Quote:
Isn't this what meta-analysis is supposed to address?
No. Meta-analysis is appropriate only in limited situations, and this isn't one of them (the current fad for the use and misuse of meta-analysis notwithstanding).

Parapsychology research doesn't consist of subjects doing amazing things (like flying or making themselves invisible). It consists of subjects doing what they do normally, but at a different frequency than you'd expect by chance. But if you give yourself a thousand opportunities, sooner or later you're going to come up with something unlikely. It's unexpected if I win the lottery, but it's completely expected that somebody wins the lottery. The mistake is deciding after the fact that the lottery winner is the one with the magical powers.

Quote:
Could you explain this?
Tests of statistical significance are based on a single comparison - a single roll of the dice. If you make more than one comparison, you are effectively giving yourself extra chances to roll an "eleven". How much do you want to bet that in addition to comparing "emotionally reactive men" with "emotionally nonreactive men", they also compared "belief in ESP" with "non-belief in ESP", "prior ESP experiences" with "no prior ESP experiences", etc.? And how much do you want to bet that if those comparisons had been "statistically significant" they would have reported on them as well as everything else? Even if you just go on what they admitted to comparing, it looks like they gave themselves a few dozen chances at rolling "eleven".
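The "extra chances" compound quickly: with m independent comparisons each tested at α = 0.05, the probability of at least one false "hit" under the null is 1 − (1 − α)^m. A short sketch; the comparison counts are illustrative, not a tally from Bem's paper:

```python
# With m independent tests each run at alpha = 0.05, the chance of at
# least one "significant" result when there is no effect at all is
# 1 - (1 - alpha)^m.
ALPHA = 0.05

for m in (1, 5, 12, 24):
    p_any = 1 - (1 - ALPHA) ** m
    print(f"{m:2d} comparisons -> P(at least one false hit) = {p_any:.2f}")

# The crude Bonferroni fix is to test each comparison at alpha / m.
print(f"Bonferroni threshold for 12 comparisons: {ALPHA / 12:.4f}")
```

Even a dozen unadjusted comparisons pushes the false-positive chance close to a coin flip, which is why unreported comparisons matter.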


Quote:
Eh? That term was used to describe the unexpected results of the low-affect trials with more than 8 subliminal exposures.
Yes, it's a way to dismiss results that would make the overall results average. PEAR likes to do that as well.

Linda
Old 13th December 2006, 02:11 PM   #101
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by CFLarsen View Post
That's the point, David: We can't know.
Then why did you say that you check if you now say you can't?

If you're talking about independent replication then that's fine. But someone who reads any scientific paper in any field can't check, like you claimed before. That's why I have to accept the accuracy and truthfulness of scientific reporting, because otherwise you could talk yourself into rejecting whatever findings you don't like the look of, on the basis that "it could have been made up for all I know".

Quote:
So, why are you so eager to believe these people when they say that they are doing right, when you (say that you) doubt them when they report their results?
Where have I said I "doubted them"?

Quote:
And please explain how Bem can possibly suggest what he did. Would you call that "replication"?
Yes I would call that a replication. It was an experiment to test the PH hypothesis with the same methods as Bem used. Why isn't it a replication?
Old 13th December 2006, 02:31 PM   #102
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by fls View Post
I'm suggesting that the number of necessary negative studies falls well within the realm of "plausible".
Wouldn't this involve doing a meta-analysis on the results?

Quote:
Tests of statistical significance are based on a single comparison - a single roll of the dice. If you make more than one comparison, you are effectively giving yourself extra chances to roll an "eleven". How much do you want to bet that in addition to comparing "emotionally reactive men" with "emotionally nonreactive men", they also compared "belief in ESP" with "non-belief in ESP", "prior ESP experiences" with "no prior ESP experiences", etc.? And how much do you want to bet that if those comparisons had been "statistically significant" they would have reported on them as well as everything else? Even if you just go on what they admitted to comparing, it looks like they gave themselves a few dozen chances at rolling "eleven".

Why does it look like Bem gave himself this opportunity? It's certainly possible, but that would make him a bad and dishonest scientist. Is that what you really think?
Old 13th December 2006, 02:44 PM   #103
Merko
Graduate Poster
 
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by davidsmith73 View Post
Wouldn't this involve doing a meta-analysis on the results?
You could do a meta-analysis covering a field by compiling results from different researchers. Such an analysis might state that the experiment conducted by Smith, verified by Yin and later by Shansky, appears to prove the existence of the X effect, while the research by Prabaharan, Kurtz and Young, to name the most comprehensive studies, indicates that the proposed Y effect is not present.

Where it all goes wrong is when you try to add up the mathematical probabilities of different studies when you have no confirmation that the probabilities claimed in the underlying studies are valid.

Originally Posted by davidsmith73 View Post
Why does it look like Bem gave himself this opportunity? It's certainly possible, but that would make him a bad and dishonest scientist. Is that what you really think?
Bad, yes, dishonest, not necessarily. It is clear that many of these types do a study first, and define the criteria for what is a 'hit' afterwards. That's bad science, but not necessarily done in bad faith.
Old 13th December 2006, 02:47 PM   #104
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by Merko View Post
First of all, they should stick their necks out and make a very strong statement that, according to their research, some very well defined effect definitely exists and can be measured through a test that they have performed and that could be repeated by other scientists. If they really are so sure, then they need to put their scientific reputations at risk here. It's not enough to hint that 'further research is merited'.

I know of no such claim. Do you?
Not yet.

There are three independent successful experiments on anomalous anticipatory effects of the nervous system, according to the 2006 parapsychology convention abstracts.

http://www.parapsych.org/pa_abstracts_2006.html

All that kicked off with Dean Radin's presentiment experiments, I think. Perhaps this, along with precognitive habituation, is parapsychology's best shot so far.
Old 13th December 2006, 02:54 PM   #105
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by Merko View Post

Bad, yes, dishonest, not necessarily. It is clear that many of these types do a study first, and define the criteria for what is a 'hit' afterwards. That's bad science, but not necessarily done in bad faith.
I don't believe that a professor of psychology would do that and not realise that it's bad science. But remember, there's no reason to assume that Bem actually did that.
Old 13th December 2006, 03:03 PM   #106
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by davidsmith73 View Post
Then why did you say that you check if you now say you can't?
Again, you are missing the point. We check if we possibly can.

Originally Posted by davidsmith73 View Post
If you're talking about independent replication then that's fine. But someone who reads any scientific paper in any field can't check, like you claimed before. That's why I have to accept the accuracy and truthfulness of scientific reporting, because otherwise you could talk yourself into rejecting whatever findings you don't like the look of, on the basis that "it could have been made up for all I know".
Rubbish. Either you accept the word of people, or you rely on evidence. It doesn't matter whether it's their explanation of how they did something, or the results they got.

Originally Posted by davidsmith73 View Post
Where have I said I "doubted them" ?
Gee, did I misunderstand you, when you said this?

Originally Posted by davidsmith73 View Post
When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.
If I did, please explain what you meant.

Originally Posted by davidsmith73 View Post
Yes I would call that a replication. It was an experiment to test the PH hypothesis with the same methods as Bem used. Why isn't it a replication?
Are you out of your f**king mind? How can you possibly call it a replication, when Bem himself demands not one, but two "critical" changes to the experimental setup?

That is not a replication. That, David, is a prime example of moving the goal posts.

By insisting that he dictates the conditions of the experiment that should replicate his first experiment, Bem proves himself to be a fraud. There is no way you can justify that Bem demands that skeptics "replicate" his experiments, only that they do it his way, which is fundamentally different from the experiment he did.

You are defending a crook.
Old 13th December 2006, 03:09 PM   #107
psp02ls
Scholar
 
Join Date: Mar 2006
Posts: 73
David.

I don't want to criticise Daryl Bem. He's a nice guy, I've met him and chatted to him about the area.

But here is the voice of experience and learning (things fundamental to science). I have first-hand experience of Professor Bem making a mistake with analysis. I gave him some data of mine to examine and he pulled out a number of significant findings. However, I went back into my data and noticed some errors in his calculations. Correcting for those errors eliminated the results.

If I had trusted and not checked, the story might be different.

Perhaps journal editors should note which authors are theists and which are atheists. That might eliminate the need to read rubbish, whilst putting pressure on people to consider their positions. Where else do we even start?
Old 13th December 2006, 03:22 PM   #108
Kaylee
Illuminator
 
Join Date: Feb 2005
Posts: 4,283
Originally Posted by Merko View Post
You can get a good idea of a journal's standing within its field by the Science Citation Index. Unfortunately it's not freely available, but if you're affiliated with some University you can probably get access from there. Or someone else with access (I don't have it unfortunately) could give you the index for this particular journal and some others in the same area.
Thanks Merko! I just looked on the web and one of the research libraries in New York City has it. Next time I'm in midtown during the week, I'll look it up.

Quote:
Of course a good standing within a certain field of research is no absolute guarantee for quality, but the title of this one suggests it's an empirically based field.
One can hope.
Old 13th December 2006, 03:33 PM   #109
fls
Penultimate Amazing
 
Join Date: Jan 2005
Posts: 10,226
Originally Posted by davidsmith73 View Post
Wouldn't this involve doing a meta-analysis on the results?
Realistically? No. We've had a look at the only "promising" studies and they barely show anything. The few negative studies that psp02ls mentioned are probably more than enough to negate the "effect".

I know that doesn't eliminate the straw you are grasping at, though. Which goes back to my original point. The research in parapsychology consists of grasping at straws without ever getting ahold of anything. Sooner or later, you'd think it would dawn on them why that is.

Quote:
Why does it look like Bem gave himself this opportunity?
Experience. And it's right there in the article you referenced - how many different ways did he split up the data on his charts?
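The effect of slicing the data many ways can be made concrete with a small simulation (my own illustration, nothing from Bem's paper). It treats each look at the data as an independent test on fresh noise, which overstates things a little since overlapping splits of one dataset are correlated, but the inflation is real either way: with 20 such looks, a nominal 5% test "succeeds" most of the time.

```python
import random

random.seed(42)

def noise_analysis_significant(n=50, threshold=1.96):
    """One analysis of pure noise: n binary trials with a true 50%
    hit rate, tested with a normal-approximation z test against
    chance. Returns True if it would be reported as 'significant'."""
    hits = sum(random.random() < 0.5 for _ in range(n))
    z = (hits - n * 0.5) / (n * 0.25) ** 0.5
    return abs(z) > threshold

def any_look_significant(n_looks=20):
    """Run 20 analyses of noise; True if at least one 'works'."""
    return any(noise_analysis_significant() for _ in range(n_looks))

trials = 2000
rate = sum(any_look_significant() for _ in range(trials)) / trials
print(rate)  # far above the nominal 0.05 false-positive rate
```

Unless the analysis plan is fixed before the data are collected, "significant" subgroups like these are exactly what chance alone delivers.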

Quote:
Its certainly possible but that would make him a bad and dishonest scientist. Is that what you really think?
I suspect wishful thinking drives a lot of manipulation. I don't discount the possibility that he is being deliberately dishonest, though. I should add that this issue is not confined to parapsychology research.

Linda

Last edited by fls; 13th December 2006 at 03:36 PM.
Old 13th December 2006, 03:55 PM   #110
Merko
Graduate Poster
 
Merko's Avatar
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by psp02ls View Post
Perhaps journal editors should note which authors are theists and which are atheists.
It wouldn't help. There are so many ways to be illogical, it's in no way limited to theism. And there are lots of theists who somehow manage to keep themselves to stringent scientific principles when they conduct research, too. It seems strange to me how they can do this, but there is plenty of evidence that they can.
Old 13th December 2006, 03:56 PM   #111
psp02ls
Scholar
 
Join Date: Mar 2006
Posts: 73
Originally Posted by davidsmith73 View Post
Not yet.

There are three independent successful experiments on anomalous anticipatory effects of the nervous system, according to the 2006 parapsychology convention abstracts.

http://www.parapsych.org/pa_abstracts_2006.html

All that kicked off with Dean Radin's presentiment experiments, I think. Perhaps this, along with precognitive habituation, is parapsychology's best shot so far.
Almost my entire PhD is that kind of experiment David. Radin's been into time-reversed effects for ages. I thought the same as you once. Turned out I was wrong.
Old 13th December 2006, 04:01 PM   #112
Merko
Graduate Poster
 
Merko's Avatar
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by davidsmith73 View Post
There's 3 independent successful experiments on anomalous anticipatory effects of the nervous system according to the 2006 parapsychology convention abstracts.
Come back when you have a researcher in parapsychology who has a secure position and is willing to risk her reputation on the claim that some precisely stated PSI effect exists and that she has a test for it.

I don't believe in that stuff. I don't dismiss it completely, but if not even the proponents are sure about it enough to take a risk, I'm not interested. There's an infinite set of unlikely claims to explore.


EDIT: I followed your link and found two of the studies I think you're referring to. These two studies are completely different; I cannot see how one of them would add any credibility to the other. I can imagine a wide set of possible flaws in these studies. But the question I'm asking here is: if these people think that these effects really exist, why can't they, after all these years, agree on a standard practice for measuring them?

In my opinion, the theories these people are investigating are on a par with the theory that there's a pink elephant in the room whenever we close our eyes. It also disappears if we use a camera, by the way. But it could still be there!

Last edited by Merko; 13th December 2006 at 04:12 PM.
Old 13th December 2006, 04:06 PM   #113
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by psp02ls View Post
Perhaps journal editors should note which authors are theists and which are atheists. That might eliminate a need to read rubbish, whilst put pressure on people to consider their positions. Where else do we even start?
Are you suggesting that theists cannot be objective about parapsychology, whereas atheists can be? How about agnostics?
Old 13th December 2006, 04:31 PM   #114
Merko
Graduate Poster
 
Merko's Avatar
 
Join Date: Nov 2006
Posts: 1,899
Originally Posted by Rodney View Post
Are you suggesting that theists cannot be objective about parapsychology, whereas atheists can be? How about agnostics?
I think he's probably just fed up after spending years wading through reports from 'faith-based scientists'. Now, I would say that while theists by necessity must have a disposition to assume that PSI-phenomena exist, an atheist could be disposed either way, or be completely neutral.

But I don't believe in the 'objective scientist' anyway. I think that most scientists have some sort of agenda. They'd spend their entire career trying to prove something that they just decided must be there, even before they had evidence, on a 'hunch'. There's nothing wrong with that; this sort of motivation is probably needed for someone to put in the hard work that many discoveries require. And I believe that we can make correct conjectures, better than chance, even when we do not have the kind of solid evidence that is required in science. The formalism is required to make something into acceptable science, but that doesn't mean these 'hunches' are always just some kind of woo.

The academic system is fortunately capable of handling this. Even though very few scientists ever admit that they were wrong, a general opinion will emerge that they were, and they will gain few followers.
Old 13th December 2006, 05:48 PM   #115
Jeff Corey
New York Skeptic
 
Jeff Corey's Avatar
 
Join Date: Aug 2001
Posts: 13,714
Includes an aside to CFLarsen

Originally Posted by Merko View Post
...But I don't believe in the 'objective scientist' anyway. I think that most scientists have some sort of agenda. They'd spend their entire career trying to prove something that they just decided must be there, even before they had evidence, just a 'hunch'...
I don't know where you ran into these scientists, but in my personal experience they are rare. Lucky me.
Aside to CFLarsen.
By the way, Claus, there are two kinds of replication: direct and systematic. Direct replication is when you follow the original method as precisely as possible to see if you get the same result. Think cold fusion.
Systematic replication is when you vary parameters to see how robust the original finding is.
Direct replication is normally the first step.
Old 13th December 2006, 06:45 PM   #116
Rodney
Illuminator
 
Join Date: Aug 2005
Posts: 3,942
Originally Posted by Merko View Post
Now, I would say that while theists by necessity must have a disposition to assume that PSI-phenomena exist,
So how do you explain theists like Martin Gardner, who are extremely skeptical of PSI-phenomena?

Originally Posted by Merko View Post
an atheist could be disposed either way, or be completely neutral.
How, pray tell (if you'll pardon the expression), could an atheist be disposed to think PSI could exist?
Old 14th December 2006, 01:24 AM   #117
Ersby
Fortean
 
Join Date: Sep 2001
Posts: 1,881
Originally Posted by Rodney View Post
How, pray tell (if you'll pardon the expression), could an atheist be disposed to think PSI could exist?
Are you suggesting that a belief in psi is only possible if you're a theist? Are you saying that psi must be the work of a higher intelligence and cannot possibly be a natural mechanical process?
__________________
"Once a man admits complete and unshakeable faith in his own integrity, he is in an excellent frame of mind to be approached by con men." David W. Maurer, "The Big Con"
Old 14th December 2006, 01:43 AM   #118
CFLarsen
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 42,371
Originally Posted by Jeff Corey View Post
By the way Claus, there are two kinds of replication. Direct and systematic. Direct is when you follow the original method as precisely as possible to see if you get the same result. Think cold fusion.
The second is systematic replication, when you vary parameters to see how robust the original finding was.
Direct replication is normally the first step.
Yep, "Heureka!" and "Let's Tear It Apart"

My point is that they are skipping over direct replication and moving on to something which, to the uninformed observer (or blind believer), looks like systematic replication. But that's not what it is: they are simply moving the goalposts around, while trying to forget - or ignore - that there is no ball to begin with.
Old 14th December 2006, 04:38 AM   #119
davidsmith73
Graduate Poster
 
Join Date: Jul 2001
Posts: 1,697
Originally Posted by CFLarsen View Post
Again, you are missing the point. We check if we possibly can.
But we've just agreed that there is no way to ensure that fraud didn't happen.

Quote:
Rubbish. Either you accept the word of people, or you rely on evidence. It doesn't matter if it is when they explain how they did something, or what results they got.
If the evidence is in the form of a written paper then the same argument applies.

Quote:
Gee, did I misunderstand you, when you said this?

Quote:
When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.
If I did, please explain what you meant.
Yes, you did misunderstand me. I thought I made it perfectly clear in that statement. Let's take Louie's first PH paper as an example. I believe that he reported his methods and results sections accurately and truthfully, and I can't find anything wrong with his methodology. But we can agree or disagree on the conclusions he made. Louie himself even disagrees with what he wrote in the conclusion back then.

Quote:
Are you out of your f**king mind? How can you possibly call it a replication, when Bem himself demands not one, but two "critical" changes to the experimental setup?
Because the changes that were made did not affect the hypothesis being tested. The independent scientist asked the participants not to look away from the screen and left out the erotic images. So the experiment was a replication of the negative-image effect from the supraliminal experiment.

As Jeff says, there are exact replications and there are conceptual replications. Louie was involved in a conceptual replication attempt because he used different types of images. I think the Williams replication mentioned in the Bem article is pretty close to an exact replication of the effect found using negative images.

Quote:
There is no way you can justify that Bem demands that skeptics "replicate" his experiments, only that they do it his way, which is fundamentally different from the experiment he did.
How were the conditions fundamentally different?
Old 14th December 2006, 05:04 AM   #120
Cuddles
Penultimate Amazing
 
Join Date: Jul 2006
Posts: 18,733
Originally Posted by Merko View Post
But I don't believe in the 'objective scientist' anyway. I think that most scientists have some sort of agenda. They'd spend their entire career trying to prove something that they just decided must be there, even before they had evidence, just a 'hunch'.
Utter rubbish. All scientists work on things we know exist. If someone studies convection in the Sun, we might not know exactly how it works, or even whether it is definitely convection and not something else, but we know damn well that the Sun is there and that something is happening. Even scientists looking for hypothetical particles that quite possibly don't exist know that some particles exist and that new ones have been found in the past. Even when they don't find anything, or even prove themselves wrong, they have added a bit of knowledge to the world and can move on to new research in the field.

What makes parapsychology different from all other science is that this can't happen. If a physicist spends decades looking for a new particle only to find it isn't there, they can move on to study different particles. If a parapsychologist spends decades looking for psi and finds it isn't there, they are out of a job and have no relevant qualifications or research to find a new one. With all other sciences, theories might be right or wrong, but the field will always be there. With parapsychology, if the theories are wrong then there is no field.