I found these forums when a search for non-vendor info on the Psyleron REG-1 pointed to a 2008 thread about same. Reading that thread prompted a thought experiment (er..) I'd like to pose to the group.
The PEAR research *claims* to have found good evidence that the human mind can affect genuinely random bit streams generated by physical processes (like reverse-biased zener diode noise), but not deterministic pseudo-random number generators. However, they claim the effect is on the order of one part in ten thousand. So for example, when the subject is trying to generate more 1's than 0's (or vice versa), there would be around 10,001 ones for every 10,000 zeros (or vice versa), an excess which did not appear when the subject was not trying to influence the bitstream. (They made other claims too, but the thought experiment is only about this limited aspect.)
Some of the responses here assumed that they had probably cherry-picked the data or used naive statistical techniques. My general scepticism about "extraordinary claims" makes me suspect that is more likely than not (though that's more of a hunch than a fact, and I don't have a set-in-concrete position). However, it raises an interesting question: could it be done right, and how?
Obviously, to sort out this effect you would have to collect a large number of trials to pull such a weak signal out of the noise. Does that mean it's inherently beyond any possibility of scientific experimentation? I think not, but I'd like some thoughts from the scientists and statisticians here.
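For a sense of scale, here's a back-of-the-envelope power calculation (a sketch in Python; the significance level, power, and the reading of "1 in 10^4" as a shift of p = 0.5 + 5e-5 are my own assumptions, not anything from PEAR):

```python
def bits_needed(delta, z_alpha=1.645, z_beta=0.8416):
    """Approximate number of bits needed for a one-sided z-test
    to detect p = 0.5 + delta, using z_alpha for a 5% one-sided
    significance level and z_beta for 80% power.  Under the null,
    the std dev of the proportion of ones is 0.5/sqrt(N)."""
    return ((z_alpha + z_beta) * 0.5 / delta) ** 2

# An excess of 1 one per 10,000 bits means p = 0.5 + 5e-5,
# which works out to roughly 6e8 bits for a single detection.
n = bits_needed(5e-5)
```

So even before the complications below, "a good number of experiments" means something like hundreds of millions of bits per condition.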
How would YOU design an experiment to objectively test this as a hypothesis? I expect that one might, for example, record timestamped bitstreams along with timestamped periods in which the subject was instructed to favor 0's, favor 1's, or not try for any influence. This could be done blind in the sense that the experimenter need not themselves know which of the three modes a subject is being asked to take (e.g., one/zero/neither periods could be shown on a screen visible only to the subject). One could also switch between true random and pseudo-random bitstreams without the knowledge of the experimenter or subject, or even the statistician doing the initial analysis (while recording the switch for later validation).
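As a concrete sketch of that blinding, one could pre-generate the whole schedule of intention modes and RNG sources before the session, show only the mode to the subject, and reveal the source column to nobody until the analysis is locked. (Field names here are purely illustrative, not from any real apparatus.)

```python
import random

def make_blinded_schedule(n_periods, seed=None):
    """Pre-generate a per-period schedule: which intention mode
    the subject's screen will display, and whether the bits come
    from the true RNG or the pseudo-RNG.  Kept sealed from the
    experimenter and the analyst until after initial analysis."""
    rng = random.Random(seed)
    return [
        {
            'period': i,
            'mode': rng.choice(['one', 'zero', 'neutral']),
            'source': rng.choice(['true_rng', 'pseudo_rng']),
        }
        for i in range(n_periods)
    ]
```

Logging the seed (or the whole schedule) in a sealed record lets a third party validate later that the unblinding matched what was actually run.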
Note that we don't need to know that the random number generator is "perfect" (the very definition of which is a difficult question). If there were a statistically significant excess of ones when the subject is so instructed, and zeros when the subject is so instructed, using the same hardware, that is in itself an interesting finding. (In fact, one could argue that if the bitstream is influenceable by the human mind, then it's obviously not perfectly random; so the question is more about whether a non-deterministic bitstream is influenceable).
Like many physics experiments, I suspect the results would be a bracketing: either "we find an effect in the range of 1 in X bits +/- error" or, "the effect could be no larger than 1 in Y bits (or we would have seen it)". That is, you could not per se "disprove" the effect, but you COULD constrain its maximum magnitude, not unlike many CERN experiments.
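For the "constrain the maximum magnitude" branch, a simple normal-approximation upper bound on the bias might look like this (a sketch; the one-sided 95% level is my assumption, and the normal approximation is fine at these bit counts):

```python
from math import sqrt

def upper_bound_delta(ones, n, z=1.645):
    """One-sided 95% upper confidence bound on the bias
    delta = p - 0.5, from ones observed in n bits, using the
    normal approximation with null std dev 0.5/sqrt(n)."""
    delta_hat = ones / n - 0.5
    return delta_hat + z * 0.5 / sqrt(n)

# A null result over 10^9 bits constrains delta to
# a few parts in 10^5:
bound = upper_bound_delta(500_000_000, 1_000_000_000)
```

That is the "the effect could be no larger than 1 in Y bits" statement in quantitative form: Y shrinks as the square root of the number of bits collected.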
For the statistically well informed: what specific statistical methods would you use to correlate a stream of 1's and 0's with a slower recording of "intention" periods? How many trials would it take? How would you define your p-value thresholds for "showing an effect" and for "constraining the maximum magnitude of an effect not found"?
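One candidate answer to my own question, as a starting point: pool the intention periods, count each bit that matches the instructed direction as a "hit", and run a one-sided binomial z-test against chance. This is a sketch under that pooling assumption; real data would also want the neutral periods analyzed as a control, and the time alignment between bitstream and instruction timestamps verified.

```python
from math import erfc, sqrt

def intention_z_test(periods):
    """periods: list of (mode, ones, total) with mode in
    {'one', 'zero', 'neutral'}.  A 'hit' is a bit matching the
    instructed direction; intention periods are pooled and
    tested against p = 0.5.  Returns (z, one-sided p-value)."""
    hits = trials = 0
    for mode, ones, total in periods:
        if mode == 'one':
            hits += ones
        elif mode == 'zero':
            hits += total - ones
        else:
            continue  # neutral periods serve as a separate control
        trials += total
    z = (hits - trials / 2) / (0.5 * sqrt(trials))
    p = 0.5 * erfc(z / sqrt(2))  # one-sided upper-tail p-value
    return z, p

z, p = intention_z_test([('one', 5100, 10000),
                         ('zero', 4900, 10000),
                         ('neutral', 5050, 10000)])
```

Note that pooling "favor 1" and "favor 0" periods this way tests for influence in the instructed direction, which is the hypothesis as stated, rather than for a fixed hardware bias, which would cancel out.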
After answering the simpler question, let's make the thought experiment a bit harder. Suppose the effect is stipulated to potentially work better for some people, or on some days than on others, just as some people play golf better on some days than others - that is, as a normal aspect of varying human performance rather than as a deliberate scam. An acknowledged real ability at the limits of perception or cognition might have such variability (something which comes to mind from conventional science is a recent experiment in which soccer experts predicted the score of a game from limited information and were rated as doing better or worse than random chance under variations in attention and distraction). You would of course have all the data, for good and bad days, and would need to find a way to analyze it fairly without biased cherry picking. Experimenters in other subjective psychological areas have dealt with variable performance, but maybe their primary "signal to noise" ratio was higher than 1 in 10^4.
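One way to use all the data without cherry-picking the good days might be to compute a z-score per session and combine them with Stouffer's method, which includes every session by construction, so no post-hoc selection is possible. (A sketch; weighted variants and hierarchical models are alternatives, and which to use should be fixed in advance.)

```python
from math import sqrt

def stouffer_combined_z(session_zs):
    """Combine per-session z-scores with Stouffer's method:
    sum of z's divided by sqrt(k).  Every session, good day or
    bad, contributes; nothing is discarded after the fact."""
    return sum(session_zs) / sqrt(len(session_zs))

# A few strong sessions diluted by several null ones still
# contribute to the combined statistic:
combined = stouffer_combined_z([2.5, 0.1, -0.3, 1.8, 0.0])
```

The key discipline is that the analysis plan, including how sessions are weighted, is registered before unblinding, so "variable performance" becomes extra variance to average over rather than an invitation to select.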
Would this combination (statistically small effect AND potentially varying performance) make such a hypothesized effect theoretically or practically impossible to investigate under any circumstance, rendering it inherently outside the envelope and authority of science? Or could an experimental regime be designed which would eventually either demonstrate an effect (with increasing confidence in the error bounds), or constrain the upper limits of such an effect (with decreasing limits)?
Zeph