Tags: statistical analysis, statistical methods, telekinesis

Old 30th August 2018, 10:03 AM   #281
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Crossbow View Post
Sorry 'Buddah'!

I misread your posting and, as such, made a stupid mistake about it.

My apologies to you.
Buddha, this is an example of how to accept and respond to criticism in a civil debate. Please follow Crossbow's example when your errors are pointed out to you.
Old 30th August 2018, 10:17 AM   #282
P.J. Denyer
Illuminator
 
Join Date: Aug 2008
Posts: 4,323
Originally Posted by JayUtah View Post
Buddha, this is an example of how to accept and respond to criticism in a civil debate. Please follow Crossbow's example when your errors are pointed out to you.
Some chance! This is the guy who thinks he knows better than every accepted theologian, biologist, physicist, etc., in history; why would he admit to error?

You cast pearls before swine JayUtah, but please accept a short but heartfelt round of applause from the cheap seats for your patience and dedication to reality.
__________________
"I know my brain cannot tell me what to think." - Scorpion
Old 30th August 2018, 11:02 AM   #283
The Sparrow
Graduate Poster
 
 
Join Date: Sep 2015
Location: Central Canada
Posts: 1,505
Originally Posted by P.J. Denyer View Post
...
You cast pearls before swine JayUtah, but please accept a short but heartfelt round of applause from the cheap seats for your patience and dedication to reality.
Yes, seconded.
I read these!
Old 30th August 2018, 11:04 AM   #284
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Mojo View Post
It is becoming increasingly apparent that this is because he either fails to understand it, or simply ignores it.
There's no question he doesn't understand it. Whether we're talking about control systems, basic descriptive statistics, or the methodologies of psychological experimentation, he stumbles over foundational concepts.

The broad phenomenon in fringe argumentation is that claimants will frequently try to dumb down the problem to fit the understanding they already have. This can lead to easily-discerned oversimplifications. But it can also lead to baffling and amusing exercises of the form, "Well, X is just like Y and I know Y, so because of these reasons pertaining to Y, I can make the following claims regarding X." Buddha is desperately trying to make psychology research look like one of the things he knows about -- or more accurately, one of the things he thinks he can bluff about without detection.

I brought up some of the ways experimenters employ methods in descriptive statistics to control for factors from the "messy" real-world way human subjects are obtained and the "messy" real-world ways things about them are measured. Buddha doesn't understand them, can't refute them, and so therefore they are "irrelevant." They don't fit what he already knows about the subject, so he changes the real world to fit his preconception. This is what he did with Dr. Jeffers, who addressed the suspicious inter-subject phenomena in the baselines. In order to pick apart Jeffers' analysis one has to know about the underlying statistics and how experimental scientists use them to confirm the integrity of their data. Again, Buddha doesn't understand them, can't refute them, and so pretends for a while that the Jeffers analysis doesn't exist before finally declaring it too to be "irrelevant."

As I mentioned above, Buddha is trying to foist the notion that psychology research and control-systems design must follow the same rules. That's not even a tenuous connection. But it serves his purpose by changing the subject to one he may feel more comfortable discussing. As we've seen several times, Buddha recovers from error by pontificating on some irrelevant subject, ostensibly to assure us readers that he really is as intelligent and well-informed as he needs the world to believe he is. When he is compelled to discuss psychology research, he can't seem to escape his own wild fantasies -- empirical control must be some kind of machine, impossible to detect deception except with some kind of truth serum, etc. If he can't conceive of how it's done, then it must be impossible. Fitting the problem to his knowledge rather than expanding his knowledge to accommodate the problem.

And then incidentally, we see how he misrepresented control systems. If we argue that control systems have nothing to do with psychology research and the subsequent statistical analysis, then by rights we should ignore what we ourselves have declared irrelevant. But like a good lawyer writing a brief, I've offered a few lines of reasoning to apply in the alternative. If, hypothetically, one wants to argue that psychology is indeed similar to control systems, then one still has to get the control-system part of the argument right.

Buddha doesn't. Or rather, he misrepresents the field to describe only those examples that contradict how he believes Dr. Palmer is looking at the PEAR data. I provided other examples and described how they work as congruently to Palmer as can be expected from such an inapt comparison. At best it shows the depth of Buddha's errors in his defense of PEAR. There is very little of anything he talks about on any subject that he manages to get right. But what I find most amusing is the tortured saving of face he attempts when confronted with information from his own professed field that contradicts him. He doesn't even discuss whether the information is right, or how it affects his argument. He turns immediately to casting aspersions on the person who provided it, insinuating that I'm not qualified to have that knowledge and that it must have been such an arduous chore for me to put it together. He isn't concerned at all about the argument; he only cares that his status as Alpha Brain remains intact, approachable only by extreme effort from his critics.

That's what cements this and all the other threads he's started here as a fairly predictable exercise in ego reinforcement.

Last edited by JayUtah; 30th August 2018 at 11:06 AM.
Old 30th August 2018, 12:00 PM   #285
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 29,045
Originally Posted by JayUtah View Post

The broad phenomenon in fringe argumentation is that claimants will frequently try to dumb down the problem to fit the understanding they already have. This can lead to easily-discerned oversimplifications. But it can also lead to baffling and amusing exercises of the form, "Well, X is just like Y and I know Y, so because of these reasons pertaining to Y, I can make the following claims regarding X."
Is it just me, or does this approach invariably demonstrate that the claimant doesn't even know about Y?
Old 30th August 2018, 12:43 PM   #286
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by theprestige View Post
Is it just me, or does this approach invariably demonstrate that the claimant doesn't even know about Y?
Not invariably, but often. Certainly in this case.
Old 31st August 2018, 08:20 AM   #287
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
“A more uniform distribution of scoring across subjects is suggested by my analyses using the subject as the unit. A mean run score on the experimental runs was computed for each subject by reversing the direction of the PK- scores and taking the average of the PK+ and PK- scores, weighted by the number of runs in each condition. The mean of these scores was 100.03, which is significantly above chance, although barely so (t[21] = 1.74, p < .05, one-tailed). However, when the experimental scores are contrasted to the baseline scores using a dependent t-test, the result falls just short of significance (t[21] = 1.67).” -- Palmer, page 119
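A minimal sketch of the computation Palmer describes, with invented per-subject numbers standing in for PEAR's data; everything below is illustrative, and "reversing the direction" is assumed here to mean reflecting a PK- score around the chance mean of 100:

Code:
import numpy as np
from scipy import stats

# Invented per-subject run means -- NOT PEAR's actual data.
pk_plus = np.array([100.2, 100.1, 99.9, 100.3])    # mean run score, PK+ condition
pk_minus = np.array([99.8, 100.0, 100.1, 99.7])    # mean run score, PK- condition
n_plus = np.array([50, 40, 60, 30])                # number of PK+ runs per subject
n_minus = np.array([45, 50, 55, 35])               # number of PK- runs per subject

# Reverse the direction of the PK- scores (reflect around the chance mean of
# 100), then take the run-count-weighted average of the two conditions.
pk_minus_rev = 200.0 - pk_minus
subject_means = (n_plus * pk_plus + n_minus * pk_minus_rev) / (n_plus + n_minus)

# One-sample t-test of the subject means against the chance expectation of 100,
# reported one-tailed as in the quoted passage.
t, p_two = stats.ttest_1samp(subject_means, 100.0)
print(f"mean = {subject_means.mean():.2f}, t = {t:.2f}, one-tailed p = {p_two / 2:.3f}")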

This is not a baseline, contrary to what Palmer thinks; he doesn’t have a clear idea of what the baseline is. A baseline is determined before the start of a project, not during it. For example, see https://ec.europa.eu/eurostat/statis...Baseline_study

When it is possible, scientists use theoretical considerations to establish a baseline; there is a good reason for that, which I will explain later. When it is not possible, a baseline is based on the data available before the beginning of a project.

Fluctuations of electrons from the surface of a metal form a Poisson distribution, as the theory shows. In the Princeton study the baseline is 0.5 (actually, this is a Bernoulli-trials process whose limiting case is a Poisson distribution).

Except for the electron-emission part of the equipment, it is possible that other parts of the equipment introduce bias, which may result in a non-Poisson process. To rule out this possibility, the researchers run the device without any subjects being tested, collect the results, and use certain statistical methods to determine whether the results form a Poisson distribution.

Let’s say the results do not form a Poisson distribution. In this case the team follows well-known guidelines:
1. They check that the equipment is assembled according to the manufacturer’s instructions.
2. They make sure that there are no unwanted feedback loops between the equipment and external devices (in this case the external device is the recorder).
3. They make sure that the pressure, temperature, electric current, etc. are within allowable limits.
4. They shield the equipment from external electromagnetic fields, solar radiation, etc.

If a theory is correct, these measures guarantee that the scientists are dealing with a Poisson process.
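A minimal sketch of this kind of calibration check, on simulated data only; the run length, run count, and binning are invented, and a chi-square goodness-of-fit test is assumed as the "certain statistical method":

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Unattended calibration runs: each run sums 200 Bernoulli(0.5) bits, the
# theoretical chance process (run mean 100).  Entirely simulated data.
n_bits, n_runs = 200, 2000
scores = rng.binomial(n=n_bits, p=0.5, size=n_runs)

# Chi-square goodness of fit against the Binomial(200, 0.5) expectation.
# Pool the extreme tails into the end bins so expected counts stay healthy.
ks = np.arange(86, 115)                                 # central score bins
probs = stats.binom.pmf(ks, n_bits, 0.5)
probs[0] = stats.binom.cdf(ks[0], n_bits, 0.5)          # pool left tail
probs[-1] = stats.binom.sf(ks[-1] - 1, n_bits, 0.5)     # pool right tail
expected = probs * n_runs

observed = np.array([np.sum(scores == k) for k in ks], dtype=float)
observed[0] = np.sum(scores <= ks[0])
observed[-1] = np.sum(scores >= ks[-1])

chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.3f}")   # large p: no significant deviation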

When it is possible, scientists do not base the baseline on empirical data, for several reasons: 1. If there are no theoretical considerations, scientists would not know what kind of process they are dealing with, which makes establishing a baseline extremely difficult. I should elaborate a bit on this topic. Let’s say that the results of an experiment do not form a Poisson distribution. There are non-Poisson processes as well, each with its own rules for choosing a baseline. Without knowing which one of them is present in a particular case, you won’t be able to choose a baseline. Of course, you could apply all the available goodness-of-fit tests to identify the distribution, but they could all come up empty. At the other extreme, your empirical data might fit more than one distribution, which would make the choice impossible.

2. It would require an infinite number of runs to determine the nature of a process. Take, for example, the coin-toss process. The percentage of tails fluctuates around 50% but is very seldom exactly 50%. You have to postulate that, if the coin is unbiased, the probability of either outcome is 0.5. You take certain precautions to make sure that the coin is not biased, but that is the best you can do.
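A tiny simulation of that coin-toss point (illustrative numbers only):

Code:
import numpy as np

rng = np.random.default_rng(3)

# 1000 experiments of 1000 fair-coin tosses each.
tails = rng.integers(0, 2, size=(1000, 1000)).sum(axis=1)
exactly_half = np.mean(tails == 500)
print(f"fraction of experiments landing on exactly 50%: {exactly_half:.3f}")
# The proportion hovers near 0.5 but lands on exactly 500/1000 only rarely
# (about 2.5% of the time), so the 0.5 baseline is a postulate, not an
# observation.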

This post is not finished, but I have to go back to work. I’ll be back on Tuesday. Happy Labor Day!
Old 31st August 2018, 12:15 PM   #288
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005

tl;dr -- Buddha doesn't understand the statistics PEAR used to analyze their findings. He has conflated the incidental means of validating the baseline with the actual tests for significance against the baseline. Based on that misunderstanding, he accuses Palmer of erring when he recomputed the significance. I explain how the t-test for significance works and provide an example to illustrate what Dr. Jeffers found suspicious about PEAR's use of it.
Originally Posted by Buddha View Post
This is not a baseline, contrary to what Palmer thinks...
Yes it is, contrary to what you think.

Quote:
[H]e doesn’t have a clear idea of what the baseline is.
Yes he does, in the context of PEAR's research, which intended -- correctly so -- to use the t-test for significance. Dr. Palmer explicitly notes that PEAR's decision to use the t-test instead of the Z-test improves over Jahn's predecessor Schmidt in studying the PK effect. I've mentioned this several times, but you never commented on it. I'm going to explain why it's an improvement, why PEAR was right to use it, why Palmer was right to endorse its use, and why you don't know what you're talking about.

Quote:
A baseline is determined before the start of a project, not during it.
No.

If your intent is to use the t-test for significance, then baseline data must be collected empirically. It can't be inferred from theory. It can only be done after the project design, apparatus, and protocols are in place. Now, where the baseline calibration factors are all static given the above, all the empirical baseline data may be collected prior to any experimental trials -- or even afterwards, as long as the baseline collection is independent of the experimental trials. But if instead the calibration factors include environmental factors that cannot be controlled for except at the moment of trial, then it would be a mistake to compare experimental data collected in one environment at the beginning of the project to baseline data collected in a different environment as the project proceeds.

It is up to the judgment of the experimenter to know which factors apply. In this case Dr. Jahn, an experienced engineer, properly understood that the REG apparatus was sensitive to several environmental factors, only some of which he could control for explicitly. Hence the protocol properly required calibration runs at the time of trial. This is so that the collected data sets would be reasonably assured to be independent in only one variable.

Quote:
When it is not possible, a baseline is based on the data available before the beginning of a project.
No.

There is no magical rule that says that all baseline data must be collected prior to any experimental data, and absolutely no rule that says calibration runs may not interleave with experimental runs. You're imagining rules for the experimental sciences that simply aren't true. We know from your prior threads that you have no expertise or experience in the experimental sciences, so you are not a very good authority on how experiments are actually carried out. We further know that you will pretend to have expertise you don't have, and that your prior arguments depart from "rules" you invent from that pretended expertise and then try to hold the real world to.

Quote:
Fluctuations of electrons from the surface of a metal form a Poisson distribution, as the theory shows.
Yes, in theory. The REG design is based on a physical phenomenon known to be governed principally by a Poisson distribution. That does not mean the underlying expectation transfers unaffected through the apparatus from theoretical basis to observable outcome. In the ideal configuration of the apparatus, and under ideal conditions, the outcome is intended to conform to a Poisson distribution to an acceptable amount of error.

Quote:
Except for the electron-emission part of the equipment, it is possible that other parts of the equipment introduce bias...
Not just possible, known to confound. We'll come back to this.

Quote:
To rule out this possibility, the researchers run the device without any subjects being tested, collect the results, and use certain statistical methods to determine whether the results form a Poisson distribution.
The results will never "form" a Poisson distribution. The results will only ever approximate a Poisson distribution to within a certain error.

Quote:
Let’s say the results do not form a Poisson distribution.
Specifically, if the machine is operating properly, a Z-test for significance applied to the calibration run will produce a p-value above 0.05. If the p-value falls below that threshold, it means some confound in the REG is producing a statistically significant deviation from the expected distribution.

But you misunderstand why this is of concern. You wrongly think it's because it is the goal of the experimenters to compare the experimental results to the Poisson distribution. Instead, an errant result in an apparatus carefully designed and adjusted to approximate a Poisson distribution as closely as possible indicates an apparatus that is clearly out of order. This in turn indicates an unplanned condition within the experiment, one that cannot thereafter be assumed not to have confounded the experimental data in some unknown qualitative way. The Z-test for conformance to the Poisson distribution merely confirms that the machine is working as intended, not that the machine is working so well that the Poisson distribution can be substituted as a suitable baseline.
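A toy version of that confirmation step, on simulated bits; a single-run Z-test on the mean is assumed here as one simple form of the check:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One unattended calibration run: 200 simulated bits with p = 0.5.
# Theoretical run score: mean 100, standard deviation sqrt(200 * 0.25).
score = rng.integers(0, 2, size=200).sum()

z = (score - 100) / np.sqrt(200 * 0.25)
p = 2 * stats.norm.sf(abs(z))           # two-sided p-value
print(f"score = {score}, z = {z:.2f}, p = {p:.3f}")
# A p-value comfortably above 0.05 says only that the machine is working as
# intended, not that the theoretical distribution can serve as the baseline.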

Quote:
In this case the team follows well-known guidelines...
Well-known, but known not to be exhaustive. This is the part you're missing. Yes, the operator of an REG (or any other apparatus) will have a predetermined checklist to regulate the known confounds with the hope of reducing measured calibration error to below significance. That doesn't guarantee he will succeed in removing all error such that he can set aside measurement in favor of theory.

Quote:
If a theory is correct, these measures guarantee that the scientists are dealing with a Poisson process.
No.

Certainly a conscientious team will look for sources of error. But they know they cannot do so exhaustively, and that they will never reduce the residual deviation measured by the calibration Z-test to zero. Nor is it possible to. They will only reduce the deviation until its p-value sits acceptably far from significance for their purposes. "Acceptably far" does not mean zero deviation. It merely means they have confidence that the machine is working as expected. They know from the start that they are dealing with a Poisson process. The calibration merely ensures that the Poisson effect dominates the machine's operation.

If they were to use the Z-test to compare the experimental data to the idealized Poisson distribution, the error remaining in the calibration Z-test would still be a factor. And it would have been set aside in that method.

Let's say the calibration runs produce a Z-test p-value of about 0.055 -- close to the 0.05 threshold, but still on the acceptable side of it. That's formally enough to say the machine is operating within tolerance. But the residual error still exists as a non-zero quantity. Using the Z-test combines that error with any variance in the experimental results such that they cannot be separated. When the expected variance in your experimental results is very small, this becomes a concern.

Hence the t-test for significance, which relaxes the constraint that the expected data conform to any theoretical formulation of central tendency. The data may incidentally conform, but that's not a factor in the significance test.
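A bare-bones sketch of that idea with fabricated run means; an independent two-sample Welch t-test is used below for simplicity, whereas the study quoted earlier used a dependent t-test, so treat this as the general shape rather than the actual analysis:

Code:
import numpy as np
from scipy import stats

# Fabricated run means -- NOT PEAR data.  The baseline is whatever the
# machine actually did in calibration; no theoretical distribution is assumed.
baseline = np.array([100.01, 99.97, 100.04, 99.99, 100.02, 100.00])
experimental = np.array([100.08, 100.11, 100.05, 100.09, 100.07, 100.12])

# Welch's t-test compares the two empirical samples directly.
t, p = stats.ttest_ind(experimental, baseline, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")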

Quote:
There are non-Poisson processes as well, each with its own rules for choosing a baseline.
Yes, which is why the other tests for significance besides the Z-test exist. The rule for choosing a baseline in the t-test is that the baseline is determined, to within an acceptable degrees-of-freedom tolerance, by some number of empirical runs in which the test variable is not varied. The protocol for running the calibration is determined by known factors of the test, and looks at how the expected or known confounds are thought to vary. Jahn et al. expected the confounds to vary mostly by factors that would exhibit themselves only at the time of trial, and could only be partially controlled for by machine adjustment. Hence calibration runs interleaved with trial runs.

"Better than chance" in these contexts doesn't mean varying from the Poisson distribution. It means varying from the behavior that would be expected were not some influence applied. Whatever that behavior would have been is completely up for grabs. You don't have to be able to fit it to a classic distribution. You only have to be able to measure it reliably.

Quote:
Without knowing which one of them is present in a particular case, you won’t be able to choose a baseline.
Which is why a test was developed that determines a baseline empirically, without the presumption that it would conform to some theoretical distribution. If you aren't sure which theoretical distribution is supposed to fit, or you know that no theoretical distribution will fit because of the nature of the process, then descriptive statistics provides a method for determining whether some variable in the process has a significant effect by comparing it against an empirically determined baseline. The limitations of baselines determined empirically translate to degrees of freedom in the comparison, but do not invalidate it entirely. This is Descriptive Stats 101, Buddha. The fact that you can't grasp this simple, well-known fact in the field says volumes about your pretense to expertise.

Quote:
At the other extreme, your empirical data might fit more than one distribution, which would make the choice impossible.
The t-test requires no choice -- it always uses the t-distribution. You fundamentally don't understand what it is, why it's used, or how it achieves its results.

Quote:
It would require an infinite number of runs to determine the nature of a process.
No.

This is comically naive, Buddha. You're basically arguing that the t-test itself is invalid, when it is actually one of the best-known standard measurements of significance.

No, the t-test does not require an "infinite number of runs" to establish a usable baseline. The confidence in the baseline is determined by the distribution of means in the calibration runs. The standard deviation of that metric determines the degrees of freedom, which is the major parameter to the t-distribution. The degrees-of-freedom flexibility in the t-distribution is meant to compensate for uncertainty in the standard deviation of the distribution of means in the calibration runs.

You don't compare the calibration runs to some idealized distribution. You compare them to each other. The central tendency of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline.
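To see the point numerically, here is a sketch with invented calibration means; a one-sample confidence interval stands in for the full two-sample comparison:

Code:
import numpy as np
from scipy import stats

# Two invented sets of calibration-run means with the same average but
# different run-to-run consistency.
tight = np.array([100.00, 100.01, 99.99, 100.00, 100.01, 99.99])
loose = np.array([100.30, 99.60, 100.50, 99.70, 100.20, 99.70])

for name, runs in [("tight", tight), ("loose", loose)]:
    n = len(runs)
    se = runs.std(ddof=1) / np.sqrt(n)           # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)        # 95% two-sided critical value
    print(f"{name}: baseline = {runs.mean():.3f} +/- {t_crit * se:.3f}")
# The tighter set yields a far narrower baseline interval, so a much smaller
# experimental deviation will register as significant.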

Let's say you do twenty calibration runs, and for all of them the Z-test against the Poisson distribution produces a p-value in the range 0.054-0.056. That's approaching significance, but it stays on the right side of the p = 0.05 threshold, which may be sacrosanct in your field. So you're good to go. Your confounds are just below the level of significance when compared to Poisson.

But instead we might find that the distribution of means in the calibration runs is extremely narrow. That is, the machine might be on the hairy edge of accurately approximating the Poisson distribution, but it could be very solidly within the realm of repeating its performance accurately every time. This is why the t-test is suitable for small data sets (in Jahn's case, N=23) where such behavior might be revealed in only a small number of calibration runs.

A small standard deviation in the baseline means translates to fewer degrees of freedom in the ability of the baseline to "stretch" to accommodate values in the comparison distribution of means. That means any data that stands too far outside the properly-parameterized t-distribution will be seen as significantly variant. That is, it is the consistency among the baseline runs, not their conformance to one of the other classic distributions, that makes the comparison work.

But what's more important is that any concern about the p-values of the Z-test on the calibration is irrelevant here. Whatever was causing the machine to only-just-barely produce suitably random numbers was shown in the t-test baseline computation not to vary a whole lot from run to run. Whatever the confounds are, they're well-behaved and can be confidently counted on not to suddenly become a spurious independent variable. If the subject then comes in and produces a trial that varies at p < 0.05 in the t-test from the t-distribution parameterized from those very-consistent prior runs, that's statistically significant. If that subject's performance had been measured instead against the Poisson distribution, then the effect hoped to be statistically significant would still be confounded with whatever lingering effect was producing the near-significant calibration p-values above.

In your rush to play teacher, you've really shot yourself in the foot today.

First, I covered all this previously. It was in one of those lengthy posts you quoted, adding only a single line of dismissive rebuttal. You constantly attempt to handwave away my posts as "irrelevant" or somehow misinformed, but here you are again trying to say what I've already said, as if you're now the one teaching the class. The way the poisoning-the-well technique works is that you're not supposed to drink from the same well. I explained how the t-test and its parameters work, but now it's suddenly relevant when you decide to do it...

...and get it wrong. That's our second point. You fundamentally don't understand how tests for significance work. It's clear you've only ever worked with the basic, classic distributions and -- in your particular mode -- think that's all there could ever be. As I wrote yesterday, you're trying to make the problem fit your limited understanding instead of expanding your understanding to fit the problem. And in your typically arrogant way, you have assumed that your little knowledge of the problem, gleaned from wherever, "must" be correct, and that someone with a demonstrably better understanding of the subject than you -- the eminent psi researcher John Palmer -- "must" have conceived the problem wrong.

These are questions intended entirely seriously: Do you ever consider that there are things about some subject you do not know? Do you ever consider that others may have a better grasp of the subject than you? Have you ever admitted a consequential error?

Third, now it's abundantly clear why you're so terrified to address Dr. Steven Jeffers. Your ignorance of how the t-test for significance works and achieves its results reveals that you don't have the faintest clue what Jeffers actually did. You're ignoring him because you don't have any idea how to even begin. It's so far over your head.

So the t-test for significance compares two data sets that are categorically independent according to some variable of interest (in PEAR's case, whether PK influence was consciously applied). All the potential confounds are expected to be homogeneous across the two sets. One data set is the calibration runs, represented by its mean and standard deviation. The other set is the experimental runs, similarly represented. The N-value (23, for PEAR) and the standard deviation in one distribution determine the degrees of freedom that the corresponding t-distribution can use to "stretch" or "bend" to accommodate the other distribution.

What Jeffers discovered was that PEAR's t-distribution for the calibration runs was too tightly constrained. Working backwards, this translates into not enough degrees of freedom, then into not a lot of variance in the calibration means for a sample size of 23. In fact, an absurdly small amount of variance. Too small to be possible from PEAR's protocol. Why? Because while the process underlying the REG operation is theoretically Poisson, the process variable gets discretized along the way. Discretizing a variable changes the amount by which it can vary, and consequently the ways in which statistical descriptions of such variance can appear.

Let's say you ask 10 people to name a number between 1 and 10. We take the mean. Can that mean have a value of 3.14? No. Why not? Because our divisor is 10, and can never produce more than one digit past the decimal. It could be 3.1 or 3.2, but not 3.14. Do that 20 times, for a total of 20 means computed from groups of ten. If we aggregate the means, they can't vary from group to group by anything finer than 0.1. Data points will be either coincident or some multiple of 0.1 apart. If we look at the distribution of those means, there is a limit to how closely they can approximate a classic distribution because they are constrained by where they can fall in the histogram. They can fall only on 0.1-unit boundaries, regardless of how close or far away from the idealized distribution that is. All our descriptive statistics are hobbled in this case by the coarse discretization of the data.
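A quick sketch of the arithmetic in that example (pure illustration): it shows that the group means can only land on the 0.1 grid, and that 3.14 in particular is unreachable.

Code:
import itertools
import numpy as np

rng = np.random.default_rng(2)

# 20 groups of 10 picks each, integers 1..10.
groups = rng.integers(1, 11, size=(20, 10))
means = groups.mean(axis=1)
print(sorted(set(means)))   # every mean is an integer sum divided by 10

# 3.14 is unreachable: it would require ten integers summing to 31.4.
hit = any(abs(sum(c) / 10 - 3.14) < 1e-9
          for c in itertools.combinations_with_replacement(range(1, 11), 10))
print(hit)                  # False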

All that occurs because the customary response to "pick a number between 1 and 10" is an integer. If we re-run the test and let people pick decimal numbers to arbitrary precision, then the group means can take on any real value, the aggregate of means can take on any value, and the distribution of those means across all groups has more flexibility to get close to a classical distribution. More importantly, the standard deviation of that distribution has more places to go.

What Jeffers found was that the purported distribution of means in the calibration runs is not likely to have actually been produced by the REGs because it offered a standard deviation not achievable through the discrete outputs the REG offered, just like there exists no set of integers such that their sum divided by 10 can be 3.14.

I would like you to address Jeffers, the critic of PEAR you've been avoiding for weeks. I would like to see you demonstrate enough correct knowledge of the t-test for significance to be able to discuss his results intelligently, and at the same time realize that John Palmer is not misinformed as you claim. At this point you seriously don't know what you're talking about.
Old 1st September 2018, 01:19 PM   #289
aleCcowaN
imperfecto del subjuntivo
 
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,252
Originally Posted by Crossbow View Post
After all, if telekinesis were real, then there would be people making millions of dollars per year simply by going to casinos and using their powers to rig games like roulette and craps to their benefit. Or these people would periodically win big on Powerball Games and other such things which involve random chance objects.
Didn't you know that they are hired by the casinos to make you lose money? Additionally, winning big in Powerball games and the like is how the illuminati amassed their fortune.

Summary: telekinesis is as real as the illuminati.
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics taking hold of part of the forum and decided to do something about it.
Old 1st September 2018, 01:58 PM   #290
aleCcowaN
imperfecto del subjuntivo
 
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,252
Jeffers' !!! Jeffers' !!!

Originally Posted by Buddha View Post
For whatever reason you do not understand some, although not all, of my responses. As for being repetitive, I agree with you. I try to respond to some posts that I find interesting, although they may contain similar data.
You forgot to mention "his replies have little to do with the posts he's quoting", which is exactly what you're doing here.

Originally Posted by Buddha View Post
You might be a retiree, but I am not, so I do not have time for everything I would like to do. Yes, I know, I have repeated some stuff that I wrote before, but I find your post interesting so I respond to it on a personal basis.
Oh, you have the clinical eye!

Whatever made you say that? Everyone knows I'm here to learn English (not from you, of course). But whatever my reasons are, everyone knows I'm very busy, even when I'm here. That's why I'm barely reading your posts now. Reading JayUtah's and the others' posts here gives me all I expect (and it's worth it).

"Buddha", I'm looking forward to your next zero-content post so I will be able to enjoy their replies and learn.

[you should have read it for real before even responding to it .... by the way: Jeffers' !!! Jeffers' !!! -- the constant reminder that you have definitively lost this debate: any person with a forehead taller than one inch will read Jeffers' and will find that, at the date of the post I'm replying to, your avoidance of it in this thread, not to mention your puerile attempts to claim nobody had provided that link, defines you as a hedonistic believer and not the "thinker" you pretend to be --]
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics taking hold of part of the forum and decided to do something about it.
Old 1st September 2018, 03:34 PM   #291
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by aleCcowaN View Post
You forgot to mention "his replies have little to do with the posts his quoting", which is exactly what you're doing here
You can see that he's devolved into quoting my entire posts -- especially the long ones that go into great detail -- and simply writing one or two dismissive sentences. They might have something to do with what I wrote, but at best only with one of several things that I wrote. I guess he thinks that if he can be seen to quote a post and write something -- anything -- then he can convince someone he's keeping up with the debate.

When Buddha says he doesn't have time to pursue everything he wants to, I tend to believe him. But I don't approve of how he budgets his time. He doesn't spend it on research or analysis. He just takes whatever little knowledge he has at hand and skips to the part where he produces the final result. He read perhaps one book on philosophy and then considers himself the ultimate philosopher. But all he can do is try to shoehorn everything into the topic of that one book. He clearly didn't study much biology before he wrote his book on evolution. He wants quick-and-dirty adulation, the illusion of erudition. Shortcuts work for a while, but then sooner or later you come up against a problem where you needed to have studied in depth before jumping to the conclusion. Sadly Buddha seems to have a disposition that precludes him from ever admitting failure.

His two most recent posts haven't been exactly content-free. It's just that the content is woefully naive and comically wrong. And he is probably just hoping most people will buy it without questioning it too much. But when he gets actual, in-depth criticism he runs away claiming it's all "irrelevant." All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.
Old 1st September 2018, 03:45 PM   #292
halleyscomet
Philosopher
 
 
Join Date: Dec 2012
Posts: 8,980
Originally Posted by JayUtah View Post
All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.

Hopefully sales. In past jobs I’ve had the misfortune of working with people who made whatever promises they thought would get the sale. When reality hit they dealt with the impossibility of making any of those promises come true by gaslighting and lying to keep their commission at all costs.

I say “hopefully” because I don’t really want to contemplate the consequences of those tactics being used in designing any sort of a production or test environment. I’ve had to clean up some of those messes and it’s a nightmare, especially when the poor souls who were fooled are still under the sway of the incompetent consultant.
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!
Old 1st September 2018, 09:19 PM   #293
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by halleyscomet View Post
Hopefully sales.

I say “hopefully” because I don’t really want to contemplate the consequences of those tactics being used in designing any sort of a production or test environment
Indeed, I've had plenty of experience with unbridled sales teams and clueless "consultants." Frankly I had something darker in mind -- interpersonal relationships. The term "gaslighting" comes from the Angela Lansbury film Gaslight, which deals with the eponymous behavior in a marriage. The incapacity to admit error in a relationship is heinously bad.

Years ago I helped produce an unaired pilot for the History Channel that involved interviewing the late Apollo hoax proponent Ralph Rene. The guy was bat-crap crazy, and if you could have put a face to Stockholm Syndrome, it would have been his wife's. This guy was utterly convinced he was the smartest guy on the planet and that evil forces were conspiring to prevent him from being recognized as such.
Old 2nd September 2018, 03:24 AM   #294
Garrette
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 14,635
Ingrid Bergman. Not Angela Lansbury. I mention this only because now I can say I got to correct JayUtah!

Great stuff as always.
__________________
My kids still love me.
Old 2nd September 2018, 06:53 AM   #295
Spektator
Watching . . . always watching.
 
 
Join Date: Jun 2002
Location: Southeastern USA
Posts: 1,625
Angela Lansbury was also in Gaslight, in the role of Nancy Oliver.
Old 2nd September 2018, 07:48 AM   #296
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Garrette View Post
Ingrid Bergman. Not Angela Lansbury. I mention this only because now I can say I got to correct JayUtah!
Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.
Old 2nd September 2018, 08:47 AM   #297
Spektator
Watching . . . always watching.
 
 
Join Date: Jun 2002
Location: Southeastern USA
Posts: 1,625
Originally Posted by JayUtah View Post
Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.
With respect, let us settle this like gentlemen, sir. Angela Lansbury has a secondary role in the 1944 film, cast list here. She was not in the 1940 film, a British production, cast list here.
Old 2nd September 2018, 09:10 AM   #298
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Spektator View Post
With respect, let us settle this like gentlemen, sir. Angela Lansbury has a secondary role in the 1944 film, cast list here. She was not in the 1940 film, a British production, cast list here.
There's not much to settle as far as I'm concerned, but thanks for completing the research. The "yes and no" was in response to Garrette's "I got to correct JayUtah." Yes, because -- as you point out -- Lansbury has only a supporting role, not the lead. It would have been more proper to cite the leading lady. I will happily accept the correction to avoid future confusion. No, because she is, in fact, in the film -- the one I regard as the better adaptation, and that's just the way I remember it. It probably has more to do with the way I watch films. I tend to look for the first performances of actors who later became famous. Those associations then stick out in my mind.
Old 2nd September 2018, 09:40 AM   #299
Spektator
Watching . . . always watching.
 
 
Join Date: Jun 2002
Location: Southeastern USA
Posts: 1,625
Oh, I was joshing. As a matter of fact, the 1940 version was almost lost. MGM, the studio that brought out the 1944 version, bought all rights from the British studio and the contract called for the British company to destroy all prints and negatives so there would be no competition. A good print did survive, and now both versions are available. Before the tangent ends, I admire your patience and cogency, Mr. Utah.
Old 2nd September 2018, 09:47 AM   #300
Wolrab
Illuminator
 
Join Date: Dec 2002
Posts: 4,357
Originally Posted by JayUtah View Post
You can see that he's devolved into quoting my entire posts -- especially the long ones that go into great detail -- and simply writing one or two dismissive sentences. They might have something to do with what I wrote, but at best only with one of several things that I wrote. I guess he thinks that if he can be seen to quote a post and write something -- anything -- then he can convince someone he's keeping up with the debate.

When Buddha says he doesn't have time to pursue everything he wants to, I tend to believe him. But I don't approve of how he budgets his time. He doesn't spend it on research or analysis. He just takes whatever little knowledge he has at hand and skips to the part where he produces the final result. He read perhaps one book on philosophy and then considers himself the ultimate philosopher. But all he can do is try to shoehorn everything into the topic of that one book. He clearly didn't study much biology before he wrote his book on evolution. He wants quick-and-dirty adulation, the illusion of erudition. Shortcuts work for a while, but then sooner or later you come up against a problem where you needed to have studied in depth before jumping to the conclusion. Sadly Buddha seems to have a disposition that precludes him from ever admitting failure.

His two most recent posts haven't been exactly content-free. It's just that the content is woefully naive and comically wrong. And he is probably just hoping most people will buy it without questioning it too much. But when he gets actual, in-depth criticism he runs away claiming it's all "irrelevant." All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.
As to the highlighted, we see this a lot. These appeals to lurkers, who just don't exist on this forum, show up in so many threads where the poster pretends he is doing some public service by battling the big bad stupid closed-minded skeptics.

Have any of them been proved correct? Has anybody... delurked... and exclaimed that they were swayed by a woo argument?

In the JFK and 9/11 sections, when somebody starts posting in another's defense, they can't make it past one or two points before their positions diverge into opposite convictions and they stop supporting one another. They often then ignore each other and bloviate on their own versions of fantasy.



Just who are these lurkers who follow the ISF yet never seem to jump into a thread and proclaim their agreement with the woo du jour?
__________________
"Such reports are usually based on the sighting of something the sighters cannot explain and that they (or someone else on their behalf) explain as representing an interstellar spaceship-often by saying "But what else can it be?" as though thier own ignorance is a decisive factor." Isaac Asimov
Old 2nd September 2018, 10:47 AM   #301
halleyscomet
Philosopher
 
 
Join Date: Dec 2012
Posts: 8,980
Originally Posted by Wolrab View Post
Just who are these lurkers who follow the ISF yet never seem to jump into a thread and proclaim their agreement with the woo du jour?

Well I for one have profound telekinetic powers. My abilities are such that it is a terror to behold. “The Great Turtle” from the Wild Cards Books was inspired by my burgeoning capabilities when I was younger. (The character in the books was toned down considerably to make him more believable.)

I am not coming to the defense of any of the “proofs” being offered here because, quite frankly, they’re embarrassing. I’m insulted by this garbage being passed off as genuine telekinetic power. Massaging the number that comes up in a flawed “random” number generator is hardly a display of telekinesis. It’s like pointing to a healed paper cut as proof that you can re-grow somebody’s arm. It’s pathetic. It’s a joke. If you’re going to prove you have telekinetic power you’re going to go all out. This half ass namby-pamby wishy-washy nonsense is just embarrassing to watch.

Posers. Posers all the way down.
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!
Old 2nd September 2018, 11:34 AM   #302
aleCcowaN
imperfecto del subjuntivo
 
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,252
Originally Posted by halleyscomet
Massaging the number that comes up in a flawed “random” number generator is hardly a display of telekinesis. It’s like pointing to a healed paper cut as proof that you can re-grow somebody’s arm. It’s pathetic.



and my beverage went through my sinuses .... that's Pith Award quality.
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics taking hold of part of the forum and decided to do something about it.
Old 2nd September 2018, 02:03 PM   #303
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by halleyscomet View Post
Massaging the number that comes up in a flawed “random” number generator is hardly a display of telekinesis. It’s like pointing to a healed paper cut as proof that you can re-grow somebody’s arm. It’s pathetic. It’s a joke.
"Ray, the sponges migrated about a foot and a half."

Quote:
This half ass namby-pamby wishy-washy nonsense is just embarrassing to watch.
Especially since this thread devolved from the start into the standard exercise of correcting Buddha's effluent ignorance. We haven't actually discussed a single proof for psychokinesis since the top half of the first page.
Old 3rd September 2018, 03:59 PM   #304
Garrette
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 14,635
Originally Posted by JayUtah View Post
Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.
And my one shot at glory fizzles on the launch pad. Don't mind me. I'll just be slinking away. And pouting.

Slinking and pouting. My life.
__________________
My kids still love me.
Old 4th September 2018, 05:35 AM   #305
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
Originally Posted by JayUtah View Post
Because you brought one up as a comparison to PEAR and Palmer.



No, that's not how it works. Anyone can claim to be anything as long as they don't have to demonstrate it. That's what you do. Here on this forum and elsewhere you've claimed all kinds of expertise you can't ultimately back up. Instead, I demonstrate correct understanding. That way people can draw their own conclusions about whether I know what I'm talking about. They don't have to take my word for it.



Nonsense. You brought up a clinical trial as a direct comparison to an error you are claiming Dr. Palmer made. The discussion of categorical variables is necessary to show how subject pools are homogenized in such trials and how your error would have violated that homogenization. Then I showed how, under the same model, Palmer's actions actually had the opposite effect from your mistake, and served to achieve a homogenization that would be necessary for the aggregate statistics in PEAR's findings to have the meaning they intended. If you're claiming that tests for variable independence such as the chi-square are irrelevant to experimental psychology, that's just about as ignorant a statement as can be.

I don't care about the details of how to test drugs for cancer. You have a history of rambling on about irrelevant subjects instead of sticking to the point. You also have a history of insinuating that you have expertise, but declining to demonstrate it. Since you can't understand my argument or rebut it, you're desperately trying to gaslight the audience into thinking it is irrelevant.
This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with, categorical variables are not used in clinical trials. The collected data consist of the analysis results of the subjects' blood, as in the leukemia clinical trials that I described. The conclusion that a patient was cancer-free was part of his medical record and had no bearing on the conclusion presented to the FDA. My mistaken suggestion was to exclude all of this patient's data from the study, which is similar to the mistake that Palmer made.

It seems to me that you are trying to impress your supporters with your erudition by bringing tons of irrelevant data into the discussion. But you do not have to work that hard; they are already on your side. As for me, this tactic is totally unimpressive because it shows a complete lack of originality.

Besides, you are unable to understand my request to provide psychological-study data. I didn't say that a t-test or any other test cannot be used for data evaluation; I asked you to provide at least one example of a psychological study which contains a rejected outlier. You do not have to give me a link to the report; all you have to do is describe it in your own words. But I am sure you are unable to do that, because such a report doesn't exist.
Old 4th September 2018, 05:41 AM   #306
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Buddha View Post
It seems to me that you are trying to impress...
Wow, project much?

Quote:
As for me, this tactic is totally unimpressive because it shows a complete lack of originality.
That's some pretty desperate well-poisoning there, Buddha. How about you deal with what I actually wrote in all its detail instead of whining about what you think I haven't produced or what you think I don't know.

Quote:
You do not have to give me a link to the report; all you have to do is describe it in your own words.
I did. You said it was "irrelevant." But then again I also gave you the reference to Zimbardo's book, which describes how it was done in one of the most famous psychology experiments ever. You're not a very good authority on what your critics have or have not provided.

It's not a very good time for you to be haughty considering you've been ignoring Jeffers for weeks.

Last edited by JayUtah; 4th September 2018 at 05:44 AM.
Old 4th September 2018, 05:45 AM   #307
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
Originally Posted by JayUtah View Post
But today you reversed that and claimed there was no "theory of telekinesis" that was being tested, therefore no way Dr. Palmer could have known what the inter-subject data would look like, and therefore no basis for him to determine that Operator 010's performance was anomalous.

Vague handwaving references to "statistical methods" don't address the problem that even under the theory you state there still should have been a normal distribution, and that anomalous data would still stand out against it. Nor, when I thoroughly provide the background that illustrates just how such integrity tests and homogenization procedures would be accomplished using inter-subject data, do you simply get to handwave it all away as "irrelevant." You made it relevant in posts such as these. Since you are unwilling to discuss the "statistical methods" beyond broad-strokes handwaving and content-free appeals to your own non-existent authority, we can dismiss your attempt to undermine PEAR's critics as shallow and uninformed.

But it's also worth pointing out that you're changing your story in order to manufacture reasons to sidestep challenges you can't meet. This is not honest debate.
Your suggestion that I changed my story shows that you have very little knowledge of how a theory is made. Take, for example, election surveys. There is no theory behind them; their purpose is to provide statistical data about candidates' standings in the polls. Similarly, the Princeton study researchers didn't try to "prove their theory of telekinesis" because they had none; their goal was to collect the experimental data and evaluate it.

Either you are unable to understand my argument or you distort it on purpose. Lack of understanding and a deliberate distortion are equally bad, as you already know.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 05:51 AM   #308
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Buddha View Post
Your suggestion that I changed my story shows that you have very little knowledge of how a theory is made.
That's right, just keep hurling those insults. You have no argument in any of these threads except how supposedly stupid your critics must be.

Quote:
Take, for example, election surveys.
Irrelevant. We're talking about whether research into psychokinesis proceeds according to a theory about how it must work. Both Jahn in his principal research and Palmer in his criticism advanced various theories that guided their interpretation of the data. One theory was that the effect came in bursts and was therefore possibly short-lived. Another was that it proceeded involuntarily (i.e., to explain the anomalous calibration results).

Quote:
...and evaluate it.
The evaluation included speculation about how it might work. That's what points to theory. You abandoned the notion of theory in these studies only when it became apparent that it could be used to disregard outlying data -- which the researchers all eventually did, including Jahn. You didn't have these philosophical concerns about it until you saw how it could be used to undermine your beliefs.

Quote:
Either you are unable to understand my argument or you distort it on purpose. Lack of understanding and a deliberate distortion are equally bad, as you already know.
You need a better argument than, "My critics are such terrible people."

Last edited by JayUtah; 4th September 2018 at 05:54 AM.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 05:55 AM   #309
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 8,980
Originally Posted by Buddha View Post
This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with,


You've been caught lying and exaggerating too many times for anything you write to be taken at face value. You need to provide citations for your claims. Nobody takes your pathetic attempts at insulting people seriously. You're not credible.

Originally Posted by JayUtah View Post
You need a better argument than, "My critics are such terrible people."
If he stopped using that, all he'd be left with would be his Dunning-Kruger-based diatribes.
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!

Last edited by halleyscomet; 4th September 2018 at 05:57 AM.
halleyscomet is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:00 AM   #310
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by halleyscomet View Post
You need to provide citations for your claims.
Except that they're irrelevant claims. He can't talk about the PEAR research directly with anything approaching comprehension, so he tries to say, "It must be like clinical trials," which he claims to know about. Or, "It must be like elections," which he also might know something about. He's desperately trying to make this fit whatever little knowledge he might have about how to analyze other kinds of collected data using statistics. I don't care about references to election data or cancer drugs. Those are distractions from his inability to handle the actual subject he raised.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:10 AM   #311
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
Originally Posted by JayUtah View Post

tl;dr -- Buddha doesn't understand the statistics PEAR used to analyze their findings. He has conflated the incidental means of validating the baseline with the actual tests for significance against the baseline. Based on that misunderstanding, he accuses Palmer of erring when he recomputed the significance. I explain how the t-test for significance works and provide an example to illustrate what Dr. Jeffers found suspicious about PEAR's use of it.


Yes it is, contrary to what you think.



Yes he does, in the context of PEAR's research which intended -- correctly so -- to use the t-test for significance. Dr. Palmer explicitly notes that PEAR's decision to use the t-test instead of the Z-test improves over Jahn's predecessor Schmidt in studying the PK effect. I've mentioned this several times, but you never commented on it. I'm going to explain why it's an improvement, why PEAR was right to use it, why Palmer was right to endorse its use, and why you don't know what you're talking about.



No.

If your intent is to use the t-test for significance, then baseline data must be collected empirically. It can't be inferred from theory. It can only be done after the project design, apparatus, and protocols are in place. Where the baseline calibration factors are all static given the above, the empirical baseline data may all be collected prior to any experimental trials -- or even afterwards, as long as the baseline collection is independent of the experimental trials. But if instead the calibration factors include environmental factors that cannot be controlled for except at the moment of trial, then it would be a mistake to compare experimental data collected in one environment at the beginning of the project to baseline data collected in a different environment as the project proceeds.
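To see concretely why that would be a mistake, here's a minimal Python sketch. The drift size, run counts, and Poisson mean are all invented for illustration -- this is the failure mode in miniature, not anything from PEAR's protocol:

Code:
import numpy as np

rng = np.random.default_rng(5)
LAMBDA, N, RUNS = 100.0, 1000, 20

# Hypothetical environmental drift over the life of the project:
drift = np.linspace(0.0, 3.0, 2 * RUNS)
runs = np.array([rng.poisson(LAMBDA + d, N).mean() for d in drift])

upfront_baseline = runs[:RUNS]     # all baseline data taken at the start
interleaved_baseline = runs[0::2]  # calibration interleaved with trials
trials = runs[1::2]                # trial runs spread across the project

# Against an up-front baseline the drift masquerades as an "effect";
# interleaved calibration sees the same environment as the trials.
print("vs up-front baseline:   ", trials.mean() - upfront_baseline.mean())
print("vs interleaved baseline:", trials.mean() - interleaved_baseline.mean())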

It is up to the judgment of the experimenter to know which factors apply. In this case Dr. Jahn, an experienced engineer, properly understood that the REG apparatus was sensitive to several environmental factors, only some of which he could control for explicitly. Hence the protocol properly required calibration runs at the time of trial, so that the collected data sets could reasonably be assured to differ in only one variable.



No.

There is no magical rule that says that all baseline data must be collected prior to any experimental data, and absolutely no rule that says calibration runs may not interleave with experimental runs. You're imagining rules for the experimental sciences that simply aren't true. We know from your prior threads that you have no expertise or experience in the experimental sciences, so you are not a very good authority on how experiments are actually carried out. We further know that you will pretend to have expertise you don't have, and that your prior arguments depart from "rules" you invent from that pretended expertise and then try to hold the real world to.



Yes, in theory. The REG design is based on a physical phenomenon known to be governed principally by a Poisson distribution. That does not mean the underlying expectation transfers unaffected through the apparatus from theoretical basis to observable outcome. In the ideal configuration of the apparatus, and under ideal conditions, the outcome is intended to conform to a Poisson distribution to an acceptable amount of error.



Not just possible, known to confound. We'll come back to this.



The results will never "form" a Poisson distribution. The results will only ever approximate a Poisson distribution to within a certain error.



Specifically, if the machine is operating properly, a Z-test against the Poisson expectation applied to the calibration run will produce a p-value greater than 0.05. If the p-value falls below that threshold, it means some confound in the REG is producing a statistically significant deviation.

But you misunderstand why this is of concern. You wrongly think it's because it is the goal of the experimenters to compare the experimental results to the Poisson distribution. Instead, an errant result in an apparatus carefully designed and adjusted to approximate a Poisson distribution as closely as possible indicates an apparatus that is clearly out of order. This in turn indicates an unplanned condition within the experiment, one that cannot be assumed not to have confounded the experimental data in some unknown qualitative way. The Z-test for conformance to the Poisson distribution merely confirms that the machine is working as intended, not that the machine is working so well that the Poisson distribution can be substituted as a suitable baseline.
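To make the direction of that check concrete, here's a minimal Python sketch of this kind of calibration sanity test. The Poisson mean and run length are invented; nothing here is PEAR's actual code or parameters:

Code:
import numpy as np
from scipy import stats

LAMBDA = 100.0  # hypothetical theoretical mean of the REG's Poisson process
N = 1000        # hypothetical number of samples in one calibration run

rng = np.random.default_rng(0)
calibration = rng.poisson(LAMBDA, N)  # stand-in for one calibration run

# For a Poisson process the variance equals the mean, so the standard
# error of the sample mean is sqrt(lambda / n).
z = (calibration.mean() - LAMBDA) / np.sqrt(LAMBDA / N)
p = 2 * stats.norm.sf(abs(z))         # two-sided p-value

# p above 0.05: no significant deviation; the machine looks healthy.
# p below 0.05: some confound is producing a significant deviation.
print(f"z = {z:+.3f}, p = {p:.3f}")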



Well-known, but known not to be exhaustive. This is the part you're missing. Yes, the operator of an REG (or any other apparatus) will have a predetermined checklist to regulate the known confounds with the hope of reducing measured calibration error to below significance. That doesn't guarantee he will succeed in removing all error such that he can set aside measurement in favor of theory.



No.

Certainly a conscientious team will look for sources of error. But they know they cannot do so exhaustively, and that they will never drive the residual error to zero. Nor is it possible to. They will only reduce the error until the calibration Z-test returns a p-value acceptably above the significance threshold. "Acceptably above" does not mean the error is gone. It merely means they have confidence that the machine is working as expected. They know from the start that they are dealing with a Poisson process. The calibration merely ensures that the Poisson effect dominates the machine's operation.

If they were to use the Z-test to compare the experimental data to the idealized Poisson distribution, the error remaining from the calibration would still be a factor -- and that method would simply have set it aside.

Let's say the calibration runs produce a Z-test p-value of 0.055. That's just enough to say the machine is operating within tolerance. But the error still exists as a non-zero quantity. Using the Z-test combines that error with any variance in the experimental results such that they cannot be separated. When the expected variance in your experimental results is very small, this becomes a concern.

Hence the t-test for significance, which relaxes the constraint that the expected data conform to any theoretical formulation of central tendency. The data may incidentally conform, but that's not a factor in the significance test.
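Here's a sketch of the difference in what the two tests compare. The bias and effect sizes are invented to make the contrast visible; only the shape of the computation matters:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
LAMBDA, N = 100.0, 1000

BIAS = 1.0    # hypothetical stable machine error left after calibration
EFFECT = 1.5  # hypothetical effect of the one variable under test

baseline = rng.poisson(LAMBDA, N) + BIAS             # calibration run
experiment = rng.poisson(LAMBDA, N) + BIAS + EFFECT  # experimental run

# Against *theory*, the machine's own bias and the effect of interest
# arrive entangled in a single deviation:
z = (experiment.mean() - LAMBDA) / np.sqrt(LAMBDA / N)

# Against the *empirical* baseline, the shared bias cancels and only the
# independent variable separates the two data sets:
t, p = stats.ttest_ind(experiment, baseline, equal_var=False)

print(f"experiment vs theory:             z = {z:+.2f}")
print(f"experiment vs empirical baseline: t = {t:+.2f}, p = {p:.4f}")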



Yes, which is why the other tests for significance besides the Z-test exist. The rule for choosing a baseline in the t-test is that the baseline is determined, to an acceptable degrees-of-freedom extent, by some number of empirical runs in which the test variable is not varied. The protocol for running the calibration is determined by known factors of the test, and looks at how the expected or known confounds are thought to vary. Jahn et al. expected the confounds to vary mostly by factors that would exhibit themselves only at the time of trial, and could only be partially controlled for by machine adjustment. Hence calibration runs interleaved with trial runs.

"Better than chance" in these contexts doesn't mean varying from the Poisson distribution. It means varying from the behavior that would be expected were not some influence applied. Whatever that behavior would have been is completely up for grabs. You don't have to be able to fit it to a classic distribution. You only have to be able to measure it reliably.



Which is why a test was developed that determines a baseline empirically, without the presumption that it would conform to some theoretical distribution. If you aren't sure which theoretical distribution is supposed to fit, or you know that no theoretical distribution will fit because of the nature of the process, then descriptive statistics provides a method for determining whether some variable in the process has a significant effect by comparing it against an empirically-determined baseline. The limitations of baselines determined empirically translate to degrees of freedom in the comparison, but do not invalidate it entirely. This is Descriptive Stats 101, Buddha. The fact that you can't grasp this simple, well-known fact in the field says volumes about your pretense to expertise.



The t-test requires no choice -- it always uses the t-distribution. You fundamentally don't understand what it is, why it's used, or how it achieves its results.



No.

This is comically naive, Buddha. You're basically arguing that the t-test itself is invalid, when it is actually one of the best-known standard measurements of significance.

No, the t-test does not require an "infinite number of runs" to establish a usable baseline. The confidence in the baseline is determined by the distribution of means in the calibration runs. The standard deviation of that metric determines the degrees of freedom, which is the major parameter to the t-distribution. The degrees-of-freedom flexibility in the t-distribution is meant to compensate for uncertainty in the standard deviation in the distribution of means in the calibration runs.

You don't compare the calibration runs to some idealized distribution. You compare them to each other. The central tendency of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline.
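One standard way the spread of the baseline feeds the degrees of freedom is the Welch-Satterthwaite formula used by the unequal-variance form of the t-test. Here's a small Python sketch with invented run counts and values:

Code:
import numpy as np

def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom for a two-sample t-test."""
    a, b = s1**2 / n1, s2**2 / n2
    return (a + b)**2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))

rng = np.random.default_rng(2)
LAMBDA, N, RUNS = 100.0, 1000, 12

cal_means = np.array([rng.poisson(LAMBDA, N).mean() for _ in range(RUNS)])
trial_means = np.array([rng.poisson(LAMBDA, N).mean() for _ in range(RUNS)])

s_cal, s_trial = cal_means.std(ddof=1), trial_means.std(ddof=1)
print("spread of calibration means:", round(s_cal, 3))
print("degrees of freedom:", round(welch_df(s_cal, RUNS, s_trial, RUNS), 1))

# As the calibration spread shrinks toward zero, the formula tends toward
# n2 - 1: a tighter baseline leaves fewer degrees of freedom to "stretch".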

Let's say you do twenty calibration runs, and for all of them the Z-test against the Poisson distribution produces a p-value in the range 0.051-0.055. That's approaching significance, but it stays on the right side of the p < 0.05 threshold, which may be sacrosanct in your field. So you're good to go. Your confounds sit just below the level of significance when compared to Poisson.

But instead we might find that the distribution of means in the calibration runs is extremely narrow. That is, the machine might be on the hairy edge of accurately approximating the Poisson distribution, but it could be very solidly within the realm of repeating its performance accurately every time. This is why the t-test is suitable for small data sets (in Jahn's case, N=23) where such behavior might be revealed in only a small number of calibration runs.

A small standard deviation in the baseline means translates to fewer degrees of freedom in the ability of the baseline to "stretch" to accommodate values in the comparison distribution of means. That means any data that stands too far outside the properly-parameterized t-distribution will be seen as significantly variant. That is, it is the consistency among the baseline runs, not their conformance to one of the other classic distributions, that makes the comparison work.

But what's more important is that any concern about the Z-test p-values on the calibration is moot. Whatever was causing the machine to only-just-barely produce suitably random numbers was shown in the t-test baseline computation not to vary a whole lot from run to run. Whatever the confounds are, they're well-behaved and can be confidently counted on not to suddenly become a spurious independent variable. If the subject then comes in and produces a trial that varies at p < 0.05 in the t-test from the t-distribution parameterized from those very-consistent prior runs, that's statistically significant. If that subject's performance had been measured instead according to the Poisson distribution, then the effect hoped to be statistically significant would still be confounded with whatever lingering effect was pushing the calibration p-values toward the 0.05 threshold.
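Here's a sketch of that comparison: parameterize a t-distribution from the calibration runs, then ask whether one new trial stands outside it. The run count of 23 echoes PEAR's sample size; everything else is invented:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
LAMBDA, N, RUNS = 100.0, 1000, 23

# Very consistent calibration runs -- the empirical baseline.
cal_means = np.array([rng.poisson(LAMBDA, N).mean() for _ in range(RUNS)])
m, s = cal_means.mean(), cal_means.std(ddof=1)

# A hypothetical trial whose underlying mean is shifted by +1 count.
trial_mean = rng.poisson(LAMBDA + 1.0, N).mean()

# Prediction-interval form of the one-sample t comparison: does the new
# observation fall outside what the baseline runs can accommodate?
t = (trial_mean - m) / (s * np.sqrt(1 + 1 / RUNS))
p = 2 * stats.t.sf(abs(t), df=RUNS - 1)
print(f"t = {t:+.2f}, p = {p:.4f}  (df = {RUNS - 1})")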

In your rush to play teacher, you've really shot yourself in the foot today.

First, I covered all this previously. It was in one of those lengthy posts you quoted and then dismissed with a single line of rebuttal. You constantly attempt to handwave away my posts as "irrelevant" or somehow misinformed, but here you are again trying to say what I've already said as if you're now the one teaching the class. The way the poisoning-the-well technique works is that you're not supposed to drink from the same well. I explained how the t-test and its parameters worked, but now it's suddenly relevant when you decide to do it...

...and get it wrong. That's our second point. You fundamentally don't understand how tests for significance work. It's clear you've only ever worked with the basic, classic distributions and -- in your particular mode -- think that's all there could ever be. As I wrote yesterday, you're trying to make the problem fit your limited understanding instead of expanding your understanding to fit the problem. And in your typically arrogant way, you have assumed that your little knowledge of the problem, gleaned from wherever, "must" be correct, and that someone with a demonstrably better understanding of the subject than you -- the eminent psi researcher John Palmer -- "must" have conceived the problem wrong.

These are questions intended entirely seriously: Do you ever consider that there are things about some subject you do not know? Do you ever consider that others may have a better grasp of the subject than you? Have you ever admitted a consequential error?

Third, now it's abundantly clear why you're so terrified to address Dr. Stanley Jeffers. Your ignorance of how the t-test for significance works and achieves its results reveals that you don't have the faintest clue what Jeffers actually did. You're ignoring him because you don't have any idea how to even begin. It's so far over your head.

So the t-test for significance compares two data sets that are categorically independent according to some variable of interest (in PEAR's case, whether PK influence was consciously applied). All the potential confounds are expected to be homogeneous across the two sets. One data set is the calibration runs, represented by its mean and standard deviation. The other set is the experimental runs, similarly represented. The N-value (23, for PEAR) and the standard deviation in one distribution determine the degrees of freedom that the corresponding t-distribution can use to "stretch" or "bend" to accommodate the other distribution.

What Jeffers discovered was that PEAR's t-distribution for the calibration runs was too tightly constrained. Working backwards, this translates into not enough degrees of freedom, and then into too little variance in the calibration means for a sample size of 23. In fact, an absurdly small amount of variance -- too small to be possible from PEAR's protocol. Why? Because while the process underlying the REG operation is theoretically Poisson, the process variable gets discretized along the way. Discretizing a variable changes the amount by which it can vary, and consequently the ways in which statistical descriptions of such variance can appear.

Let's say you ask 10 people to name a number between 1 and 10. We take the mean. Can that mean have a value of 3.14? No. Why not? Because our divisor is 10, and can never produce more than one digit past the decimal. It could be 3.1 or 3.2, but not 3.14. Do that 20 times, for a total of 20 means computed from groups of ten. If we aggregate the means, they can't vary from group to group by anything finer than 0.1. Data points will be either coincident or some multiple of 0.1 apart. If we look at the distribution of those means, there is a limit to how closely they can approximate a classic distribution because they are constrained by where they can fall in the histogram. They can fall only on 0.1-unit boundaries, regardless of how close or far away from the idealized distribution that is. All our descriptive statistics are hobbled in this case by the coarse discretization of the data.

All that occurs because the customary response to "pick a number between 1 and 10" is an integer. If we re-run the test and let people pick decimal numbers to arbitrary precision, then the group means can take on any real value, the aggregate of means can take on any value, and the distribution of those means across all groups has more flexibility to get close to a classical distribution. More importantly, the standard deviation of that distribution has more places to go.
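Here's a quick demonstration of that point, with invented answers. The same logic constrains which standard deviations are reachable, which is the property Jeffers examined:

Code:
import numpy as np

rng = np.random.default_rng(4)

# Twenty groups of ten "pick an integer from 1 to 10" answers:
int_groups = rng.integers(1, 11, size=(20, 10))
print(sorted(set(int_groups.mean(axis=1))))  # every mean sits on a 0.1 grid

# The same exercise with real-valued answers: means can land anywhere.
real_groups = rng.uniform(1, 10, size=(20, 10))
print(real_groups.mean(axis=1)[:5])

# A group mean of 3.14 would require ten integers summing to 31.4 --
# impossible. Discreteness likewise limits the achievable standard
# deviations of such groups.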

What Jeffers found was that the purported distribution of means in the calibration runs is not likely to have actually been produced by the REGs because it offered a standard deviation not achievable through the discrete outputs the REG offered, just like there exists no set of integers such that their sum divided by 10 can be 3.14.

I would like you to address Jeffers, the critic of PEAR you've been avoiding for weeks. I would like to see you demonstrate enough correct knowledge of the t-test for significance to be able to discuss his results intelligently, and at the same time realize that John Palmer is not misinformed as you claim. At this point you seriously don't know what you're talking about.
If you do not understand how a baseline is determined, I cannot explain it to you for a reason that is unknown to me. (This is a joke, do not take it seriously!) However, you wrote that a baseline cannot be based on theoretical considerations. I already gave one example of how this could be done; I can give more if you ask me to.
Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on the theoretical considerations that lead to a baseline.

I didn't say that t-tests are invalid, for a simple reason: I use them occasionally in my work. Your statement is a complete misrepresentation of my argument, which you are unable to grasp, so you put words into my mouth without realizing how ridiculous this makes you look.

Your example of the calculation of average values has nothing to do with my presentation; once again you put your erudition on full display without realizing that this tactic doesn't work on every opponent. I guess you used it successfully in the past, but you should know that its success rate is less than 100%.

Do I ever accept that I am wrong? I already did so by discussing my mistake in this thread. Do I concede defeat in some arguments? Yes, I do, as I recently did in an argument with my cousin. Usually I win, but this time her presentation was irrefutable. But I can also detect my opponent's weaknesses, as I did in your case. So far you have produced nothing but hot air.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:23 AM   #312
Thermal
Illuminator
 
Thermal's Avatar
 
Join Date: Aug 2016
Location: NJ USA. We Don't Like You Either
Posts: 4,126
Originally Posted by Buddha View Post
If you do not understand how a baseline is determined, I cannot explain it to you for a reason that is unknown to me. (This is a joke, do not take it seriously!) However, you wrote that a baseline cannot be based on theoretical considerations. I already gave one example of how this could be done; I can give more if you ask me to.
Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on the theoretical considerations that lead to a baseline.

I didn't say that t-tests are invalid, for a simple reason: I use them occasionally in my work. Your statement is a complete misrepresentation of my argument, which you are unable to grasp, so you put words into my mouth without realizing how ridiculous this makes you look.

Your example of the calculation of average values has nothing to do with my presentation; once again you put your erudition on full display without realizing that this tactic doesn't work on every opponent. I guess you used it successfully in the past, but you should know that its success rate is less than 100%.

Do I ever accept that I am wrong? I already did so by discussing my mistake in this thread. Do I concede defeat in some arguments? Yes, I do, as I recently did in an argument with my cousin. Usually I win, but this time her presentation was irrefutable. But I can also detect my opponent's weaknesses, as I did in your case. So far you have produced nothing but hot air.
Oh, this is some fine vintage horse****.
__________________
I am looking for other websites; you suck. -banned buttercake aficionado yuno44907
Thermal is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:24 AM   #313
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
I am going to address the topic of outliers because I forgot to mention one important thing.

The subjects for a test are usually chosen at random with the help of a table of random numbers. Once the choice is made, the subject's data remain no matter what; otherwise the test becomes nonrandom and its results are no longer valid.

Palmer suggested that the test results of the Princeton study regarding one subject should be discarded to show that the remaining results do not significantly deviate from the ones based on the Poisson distribution. He should have known better: all the test results then become meaningless because the requirement for randomization is no longer met.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:32 AM   #314
Buddha
Thinker
 
Join Date: Jun 2018
Location: New York City
Posts: 210
I still have time to respond to this:

"You don't compare the calibration runs to some idealized distribution. You compare them to each other. The central tendency of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline."

In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment. For now I suggest that my opponent read an elementary textbook on simple statistical tests.

Now I have to return to my work. I'll be back tomorrow.

This is not how the t-test works. I will discuss this topic tomorrow because it is relevant to the Princeton experiment.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:38 AM   #315
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Buddha View Post
If you do not understand how a baseline is determined...
Except that I do. And unlike you, I can demonstrate my understanding.

Quote:
I cannot explain it to you for a reason that is unknown to me. (This is a joke, do not take it seriously!)
Except that you can't explain it. You don't know what the t-test for significance is, despite your claim to be a statistician. You think Palmer was wrong to use it, but you don't seem to realize that he used it because Jahn used it, and both researchers did so because it was the appropriate test in this case. You haven't even read the PEAR research, have you?

This is a serious comment. Do not attempt to dismiss it as a joke.

Quote:
However, you wrote that a baseline cannot be based on theoretical considerations.
No. I wrote that if no purely theoretical model is expected to govern the data, there are other ways to establish a baseline for comparison. One of them is the t-test that PEAR used. It is indicated especially when a study's N-value is small, as it was in the PEAR research.

Quote:
Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on the theoretical considerations that lead to a baseline.
You flat-out said that the method Palmer used to re-evaluate the data in the absence of Operator 010 was improper because it didn't use an appropriate baseline for comparison. The paragraph you quoted gave the results of his own significance testing, following Jahn's method. How then is a discussion of significance testing suddenly "irrelevant"? The "theoretical considerations that lead to a baseline" in this case are exactly the proper parameterization of the t-distribution to produce the proper degrees of freedom in it. If this is something you think is "irrelevant" to statistical analysis, then I really don't know what to say. You might want to consider revising your claim of what you do for a living.

Quote:
I didn't say that t-tests are invalid, for a simple reason: I use them occasionally in my work.
Utter nonsense. You clearly didn't know a thing about it until I mentioned it. You tried to argue that the data had to fit one of the previously-mentioned distributions, not the t-distribution. And you wrongly claimed it would take "infinite" trials to determine which, if any, of those simple distributions might apply. You tried to undermine the very basis of the t-test, so you don't get to suddenly assure us you know what it is and how to use it. But if now you're changing your story -- once again -- and telling us you recognize it as a valid test, then you have to explain why Palmer's use of it was so wrong.

Quote:
Your statement is a complete misrepresentation of my argument, which you are unable to grasp, so you put words into my mouth without realizing how ridiculous this makes you look.
I'll leave it to the readers to decide which one of us looks ridiculous.

Quote:
...you put your erudition on full display without realizing that this tactic doesn't work on every opponent...
Pointing out your errors certainly doesn't seem to be working on you. You keep telling us how criticism has no effect on you. You don't seem to realize that's not something you should be boasting about. Yes, I know what I'm talking about. That should be apparent by now. You won't be able to just bluster or backpedal around me, so kindly stop trying. If you are simply honest about what you know, and about what you might realize during this debate you didn't know, things will go better.

Quote:
But I can also detect my opponent's weaknesses, as I did in your case. So far you have produced nothing but hot air.
That's right, just keep gaslighting. You have no argument in any of your threads that rises above accusing all your critics of being stupid. Good luck with that.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:42 AM   #316
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Buddha View Post
In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment.
I already discussed that. You didn't address what I discussed.

Quote:
For now I suggest that my opponent read an elementary textbook on simple statistical tests.
It should be clear from my posts that I know what I'm talking about. You really need a better argument than constantly accusing your critics of being stupid.

Quote:
This is not how the t-test works. I will discuss this topic tomorrow because it is relevant to the Princeton experiment.
Twenty minutes ago you said it wasn't. I guess in the meantime you must have Googled the t-test for significance and discovered that you couldn't bluster or gaslight your way around your error this time.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:44 AM   #317
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 8,980
Originally Posted by Buddha View Post
In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment.
That's right ladies and gentlemen. Buddha is seriously claiming that the random number generator from one research project was completely reliable because of electron fluctuation testing done in a COMPLETELY DIFFERENT PROJECT. This claim is made despite multiple references to how the actual baseline derived from the actual equipment was flawed.

Buddha literally cannot differentiate between a theory and an implementation. This is not an insult or a personal attack, but a cold hard fact learned from reading his actual posts. He literally cannot understand how a flawed technology can result in data that does not match what one expects from the underlying theory.

Originally Posted by Buddha View Post
For now I suggest that my opponent read an elementary textbook on simple statistical tests.
Based upon the profound and general ignorance Buddha exhibits, I suggest Buddha check out this Wikipedia page.

https://en.wikipedia.org/wiki/Dunnin...3Kruger_effect
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!
halleyscomet is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 06:53 AM   #318
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,005
Originally Posted by Buddha View Post
The subjects for a test are usually chosen at random with the help of a table of random numbers. Once the choice is made, the subject's data remain no matter what; otherwise the test becomes nonrandom and its results are no longer valid.
Nope. The test doesn't become "non-random" just because N shrinks by one. The key thing about random numbers is that they are independent of each other, so N doesn't matter.

This is nonsense. You're claiming investigators can never reject data that becomes obviously unusable as the experiment proceeds without invalidating the whole study. As I described several days ago -- and which you once again dismissed as "irrelevant" -- the subjects are typically divided randomly into the control group and the variable group, over which it is hoped that other possible confounds will be evenly distributed. Random selection is the most effective way to do that. If, for reasons that the experimenters determine, one subject's data in either group is unusable, then obviously it is set aside and N becomes slightly smaller. Depending on N, the homogenization may become looser, but with large trials that's not a problem.
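Here's a minimal sketch of that point, with invented per-subject scores. Setting one subject's data aside doesn't make the assignment non-random; it just shrinks N and the degrees of freedom:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

scores = rng.normal(0.0, 1.0, 24)  # hypothetical per-subject scores
order = rng.permutation(24)        # random assignment to two groups
control, variable = scores[order[:12]], scores[order[12:]]

# The planned comparison...
print(stats.ttest_ind(variable, control, equal_var=False))

# ...and the same comparison with one unusable subject set aside. The
# groups are still randomly assigned; N is simply one smaller.
print(stats.ttest_ind(variable[1:], control, equal_var=False))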

You're pulling out all the stops to come up with a reason why Operator 010 should remain in the data. You're completely disregarding that three other versions of the study, including one by Jahn himself, failed to replicate the findings and that Jahn accepted this.

Quote:
He should have known better: all the test results then become meaningless because the requirement for randomization is no longer met.
No, this is just another of your attempts to take the professionals to task based on your misconception. This is why I asked whether you admit error. One of the first things you should be asking in this debate is whether one of the world's most well-known psychologists experimenting in psi phenomena messed up basic experiment design, or whether an anonymous internet poster who constantly boasts about expertise he doesn't have might just be making a predictable mistake.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 07:00 AM   #319
abaddon
Penultimate Amazing
 
abaddon's Avatar
 
Join Date: Feb 2011
Posts: 18,006
Buddha, some advice.

When you find yourself stuck in a hole of your own making, it is time to stop digging.
__________________
Who is General Failure? And why is he reading my hard drive?


...love and buttercakes...
abaddon is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 4th September 2018, 07:11 AM   #320
aleCcowaN
imperfecto del subjuntivo
 
aleCcowaN's Avatar
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,252
Hey, pals, why do you waste your time replying to the recycled BS that "Buddha" drops here? He utterly failed pages ago.

The things he's dealing with here are so old. It's so evident he can only read old posts in forums and rehash them here to make them look original. Take a walk down memory lane -- the old threads here about this BS -- and you'll see what I mean. That's why he can't address Jeffers: he has nothing useful to copy or rehash about that paper that could favour his argumentation, hence the fallacy of controlling the playing field by not talking about it.

"Buddha". Yawn!
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics that are taking hold of part of the forum and decided to do something about it.
aleCcowaN is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top