Tags statistical analysis , statistical methods , telekinesis

Old 4th October 2018, 10:14 AM   #921
abaddon
Penultimate Amazing
 
abaddon's Avatar
 
Join Date: Feb 2011
Posts: 18,214
Originally Posted by Buddha View Post
I am going to reply to several posts at once, including yours. In my opinion Jay’s posts are irrelevant to the discussion so I ignore them for the most part, but you have a different opinion. If a person believes that Jay’s posts are useful, he/she should study them diligently; this is a matter of personal preference. My goal is to appeal to the audience as a whole, not to Jay. The audience is smart, and I think the vast majority of the members understand my posts very well, and see that I provide the data relevant to the discussion, and reject the one that has nothing to do with it. This doesn’t mean that everyone agrees with me, but any intelligent member sees that I am not asking them to waste their time on evaluation of extraneous and useless information. The smart ones always win!
Lolwut? The smart ones have already won. Nobody agrees with you here? Your supposed army of lurkers never turned up.

By what criteria have you won anything?


Originally Posted by Buddha View Post
However, I will respond to a remark by Jay – he wrote something about establishing a baseline in Jeffers’ study. This is not what I meant – I meant that the knowledge of a test’s purpose affects the test results in an undesirable way.
Nobody cares about your desperate attempts to rehabilitate your failed argument. All anyone cares about is whether you can demonstrate the truth of your argument, and you cannot.

Originally Posted by Buddha View Post
Again, Jay’s references to Palmer’s works are irrelevant because they do not provide any data on the treatment of outliers, which was my request. Other than that I do not see why I should read Palmer’s articles. If Jay provides links to any data regarding the use of outliers in psychological tests, I will gladly read the articles.
Why is your fixation on Jay relevant?

Originally Posted by Buddha View Post
One more thing – Jay wrote that single- and double-slit distributions in Jeffers’s experiments are not statistical variables, as he calls them. If this is true, Jeffers did a hell of a lot of useless work that is not needed for his experiments.
Why are you fixated on Jay? Nobody else believes you either.

Originally Posted by Buddha View Post
Yesterday I quoted Palmer’s remark about the possibility of changing test results by blowing on a specimen. This article provides basic data on piezoelectricity.

https://www.nanomotion.com/piezo-cer...ectric-effect/

You clearly have no clue what any of that means. As a graduate engineer, I know for a fact that your claims are rubbish, and being familiar with Jay's CV from here and elsewhere, I know you are flat out wrong.

Originally Posted by Buddha View Post
Some piezoelectric sensors are very sensitive and respond even to the slightest winds, as in blowing. However, others are much less sensitive because they are built of different materials. A manufacturer chooses appropriate sensors for a specific application. Clearly, the sensors used to measure structural changes in metal are designed in such a way that they do not respond to the air movement in a lab.
Wow.
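For what it's worth, the material dependence being gestured at here is easy to put numbers on: the charge a piezoelectric element generates is roughly q = d33 × F, and d33 varies by orders of magnitude between materials. A rough sketch below; the d33 constants and the force are representative, assumed figures for illustration, not values from Hasted's apparatus or any paper in this thread.

```python
# Sketch: charge generated by a piezoelectric element, q = d33 * F.
# The d33 constants are representative datasheet-style figures
# (assumptions for illustration, not measurements from Hasted's lab).

D33_QUARTZ = 2.3     # pC/N, typical for quartz
D33_PZT = 400.0      # pC/N, order of magnitude for a soft PZT ceramic

def charge_pc(d33_pc_per_n: float, force_n: float) -> float:
    """Charge (pC) generated by a force (N) along the poling axis."""
    return d33_pc_per_n * force_n

force_n = 0.005      # ~5 mN, an assumed figure for a gentle puff of air
q_quartz = charge_pc(D33_QUARTZ, force_n)
q_pzt = charge_pc(D33_PZT, force_n)

print(f"quartz: {q_quartz:.4f} pC   PZT: {q_pzt:.2f} pC")
```

The two-orders-of-magnitude gap is the point: which material a sensor is built from dominates whether it responds to air movement at all, so the claim needs the actual material, not hand-waving.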

Originally Posted by Buddha View Post
As for a specimen’s position, it doesn’t affect the readings, contrary to Palmer’s suggestion.

“Hasted expresses caution about these unwitnessed events but it is difficult to explain them as fraudulent since personal communication with Hasted (1984) has established that the search for hidden confederates left little opportunity for concealment. Instrumental records of four of these folding events were obtained (see apperrlix I),” Isaacs, page 60.

Apparently, there was no possibility of fraud in Hasted’s lab.
Hello DK.

BTW what is an "apperrlix"?
__________________
Who is General Failure? And why is he reading my hard drive?


...love and buttercakes...
Old 4th October 2018, 10:22 AM   #922
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,426
Originally Posted by Buddha View Post
I am going to reply to several posts at once, including yours. In my opinion Jay’s posts are irrelevant to the discussion so I ignore them for the most part, but you have a different opinion.
This is very unfortunate, because it is obvious that you would gain a lot by reading and understanding his posts.

Quote:
If a person believes that Jay’s posts are useful, he/she should study them diligently; this is a matter of personal preference. My goal is to appeal to the audience as a whole, not to Jay. The audience is smart, and I think the vast majority of the members understand my posts very well, and see that I provide the data relevant to the discussion, and reject the one that has nothing to do with it.
I am sorry to tell you that you are very wrong in thinking that there is a "vast majority" that see that you provide relevant data. The recent poll showed that not a single one of your readers thought that you have done well.

You are probably not aware that you come across as ignorant of the subject. Choosing a sceptic forum to post in guarantees a critical audience, but you should feel lucky that you have also gained the attention of real experts, from whom you could learn a lot. I certainly have.

Most here do not think that ignorance in itself is inexcusable, but wilful ignorance is.
__________________
Steen

--
Jack of all trades - master of none!
Old 4th October 2018, 02:17 PM   #923
Thor 2
Illuminator
 
Thor 2's Avatar
 
Join Date: May 2016
Location: Brisbane, Aust.
Posts: 4,352
Originally Posted by Buddha View Post

[...]

“In the most recent phase of his research, Hasted has shifted from strain gauges to piezoelectric sensors (Hasted, Robertson, & Arathoon, 1983). As used by Hasted, piezoelectric sensors measure the rate of change of stress rather than the level of stress per se. This makes them more sensitive than the strain gauges to the rapidly varying pulses that seem to characterize the ostensible PK effects. However, in order to minimize electrostatic artifact, Hasted had to eliminate much of this added sensitivity by connecting the high resistance piezoelectric transducer across a relatively low resistance (3.5 K ohms). Nonetheless, the overall piezoelectric system was still more sensitive than the strain gauges to the signals of interest.” Palmer, page 186

[...]

Something a bit odd here.

It is a common mistake for lay people to confuse stress and strain, but you wouldn't expect an engineer to confuse the two.

Quote:
Strain is the response of a system to an applied stress. When a material is loaded with a force, it produces a stress, which then causes a material to deform. Engineering strain is defined as the amount of deformation in the direction of the applied force divided by the initial length of the material.
Strain is the result of stress, and the amount of strain varies dramatically depending on the material put under stress.
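To put numbers on the distinction: stress is force per unit area, and in the elastic region strain follows from it via the material's Young's modulus. A minimal sketch, assuming a steel rod with textbook properties; all values are illustrative assumptions, not data from any experiment discussed here.

```python
# Illustrative numbers only: a 1 m steel rod, 1 cm^2 cross-section,
# loaded axially with 10 kN. E ~ 200 GPa is a textbook value for steel.

force_n = 10_000.0           # applied axial force (N)
area_m2 = 1e-4               # cross-sectional area (m^2)
length_m = 1.0               # initial length (m)
youngs_modulus_pa = 200e9    # Young's modulus (Pa)

stress_pa = force_n / area_m2            # stress = force per unit area
strain = stress_pa / youngs_modulus_pa   # Hooke's law, elastic region
elongation_m = strain * length_m         # engineering strain = dL / L

print(f"stress = {stress_pa / 1e6:.0f} MPa")
print(f"strain = {strain:.1e} (dimensionless)")
print(f"elongation = {elongation_m * 1e3:.2f} mm")
```

Swap the steel for rubber (E on the order of MPa, not GPa) and the same stress produces a strain tens of thousands of times larger, which is the "varies dramatically depending on the material" point.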
__________________
Thinking is a faith hazard.
Old 4th October 2018, 03:59 PM   #924
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Buddha View Post
In my opinion Jay’s posts are irrelevant to the discussion so I ignore them for the most part, but you have a different opinion.
Your judgment of irrelevance is not supported by the evidence. I have described in detail here and here how my posts are relevant, and I have supported that description with references to the literature and to your specific posts. You have not responded to those arguments. In contrast you have provided nothing but your say-so to justify ignoring my analysis of your argument. Have you considered the possibility that the audience is also considering the hypothesis that you are making excuses to avoid addressing challenges for reasons such as an inability to understand them or an inability to rehabilitate your criticism in light of them? What evidence do you offer to persuade them otherwise?

Let me remind you of the basis of your argument against PEAR's critics:
Originally Posted by Buddha View Post
There are several objections to this research; I am going to go over them:

1. Incorrect statistical methods were used to analyze the data.

2. The methods of analysis are correct, but the results were interpreted incorrectly

[...]

The first two objections are nonsensical, people who raised them do not know what they are talking about. As a data analyst, I use similar, although not exactly the same, methods to analyze stock market data, manufacturing data, advertisement campaigns data, etc., (I work for a consulting company)
The highlighted portion is a claim to special expertise. You are claiming to be professionally competent in the field of data analysis. You do not specifically define what that means, but you say you are confident that the methods you use are similar enough to the methods used by the authors and critics that you can claim expert understanding. We stipulate in any case that you are making a claim to expertise in both descriptive and inferential statistics.

Earlier you insinuated your expertise was in "mathematical statistics" while your critics at various web forums could be categorically considered not competent in that field. But then you also suggested that one had to be a "mathematician" in order to understand how your arguments refuted Palmer -- insinuating that none of your opponents could be expected to achieve that understanding because none of your opponents was a mathematician. This is problematic because, while you claim a B.S. in applied mathematics, you concede this does not qualify you as a mathematician. Further, you claim academic credentials as an engineer, but you also claim that engineers don't learn the kind of statistics needed to understand your critique of Palmer. It is unclear then upon what foundation you expect your claim of expertise to rest. It remains a collection of contradictory and ill-defined categorical assertions.

This examination is critical because your argument is of the form, "I am an expert in statistics, and as an expert in statistics it is my judgment that Palmer's statistical analysis is incorrect." You have made your claim to expertise a premise in the argument. You have even gone so far as simply to declare "Palmer is not a scientist," or "Palmer is not an expert in statistics" as the sum total of your response to him on any given point. You've treated Jeffers with similar categorical denials. You will note, by the way, that the categorical claims that Palmer is unqualified have been refuted. When your argument consists of little more than your purported expert judgment, then the foundation of expertise becomes really the only basis of refutation. And when this occurs, to do so is not an ad hominem argument or a personal attack. If you want the premise to stand, you must substantiate your expertise.

It is especially important in this case because we have examples of your previous arguments, which loosely followed the same form: "I am an expert in _____________, and as an expert I can say that my critics do not know what they are talking about." Those threads ended abruptly with your departure, after your critics in them demonstrated not only that they knew the subjects you were discussing, but also that they knew them better than you. Given that history, it seems prudent not to take any more of your claims to expertise at face value when they are made the premise of your argument. Hence we are testing your expertise. I can only imagine how an audience might regard such reluctance to have a premise tested.

Now you may say that your argument is not pure ipse dixit, not based on simple declarations of intellectual superiority, and has been documented all along the way with external references -- which, by the way, you demand that your opponents also provide before you will listen to them. We covered this already; your external references fall into a number of impotent categories. First, in some cases you provide such documented presentations only after one of your opponents raises a topic. Then your presentation is little more than undirected and unneeded didactics. What meaning should an audience take away from the timing? Second, in some cases your reference merely defines or mentions a concept that appears in your argument and does not in the least support the argument you have made from it. It is as if your argument is, "See, this is a real concept, therefore what I say about it must be true." This is what happened when you tried to defend your home-grown concept of randomness (i.e., to preclude excising points from a data set). You merely linked to the Wikipedia article and ignored the argument your opponents drew up using that information to show that data must be excisable by the very nature of randomness. In short, you can't show that you understand the references you link to. That leads us to a third category: in some cases you cherry-pick items from a source while ignoring that the context firmly refutes what you plan to use that reference for. This was the case when you tried to document the proper use of the t-test. And finally, when you run across something like your discovery of the power transform, you try to interject it into aspects of the discussion where it clearly doesn't belong.

One of the questions an audience will want to answer is whether this is the way a real expert uses external citations. And that in turn bears on the propriety of a demand that others use them in the same way. As I mentioned before, pseudo-science writing often adds copious references to persuade the reader -- who typically never follows them -- that the work has been meticulously researched and documented. When we do follow up on the references, we find (as we have here) that they are not what the principal work purports them to be. The insinuation behind your request -- i.e., that an argument is not valid or rigorous unless it relies on external documentation -- is simplistic. In contrast you seem to be committing the converse error, creating an impression that the apparatus of external documentation by itself conveys rigor.

In connection with the above, you might also say that you've provided plenty of deductive reasoning and mathematical proofs to support your position, and that these should stand on their own regardless of whether or not your claims to expertise are well-founded. But as we discovered in the proof-of-God thread, you are not especially proficient in the propositional logic required to construct cogent proofs. You confused inductive proof with deductive proof, for example. You committed several other logical errors there, and you are committing a few here too. Here you seem to consider it deductively conclusive that Jeffers' double-slit experiment is invalid because a Fraunhofer diffraction pattern is observed not to be bell-shaped. This neglects the problem of your misconception of premise. The dependent variable in the double-slit experiment is not the diffraction pattern itself. Along the same lines you've tried to dismissively refute the mathematics of others. The hidden premise in those arguments is that you are competent enough in your interpretation of the math to defensibly correct others. We've shown in important cases here that you aren't, that you lack knowledge of relevant facts that would be appropriate to making the specific judgment. Thus in form, such a claim is merely a further claim to expertise with extra steps.

Given this overall state of affairs, you need to consider that your audience is also weighing the hypothesis that you refuse to discuss my posts with me because doing so might demonstrate that statistics is yet another topic on which you have claimed expertise that you could not demonstrate. By refusing, according to some pretext, to engage in meaningful criticism, a claimant can continue to enjoy some benefit of the doubt regarding his actual expertise, if doubt is what he needs in order to support that point. But he cannot insist that the audience believe his pretext.

Quote:
...this is a matter of personal preference.
No. Regarding the dependent variables in the two Jeffers papers, you are simply factually wrong.

You wrongly believed that a single-slit Fraunhofer diffraction pattern formed a bell-shaped normal probability distribution that was then used as the dependent variable for the analysis in the papers. You did not know that a single-slit Fraunhofer diffraction pattern was multi-nodal. (A multi-nodal distribution cannot be modeled or approximated with Gaussian or Poisson processes.) You seem to have wrongly assumed this from a connection you made back when you were explaining Jahn. You correctly noted that the Poisson model governs the number of flowing electrons that pass a point in unit time. And you correctly noted that a Bernoulli process can, in the limit, be approximated by a Poisson distribution where λ=np. But you seem to have ignored all that passed in between those two invocations of Poisson in the actual study, and therefore wrongly connected them in a way that now precludes you from understanding Jeffers. This is what I explain in parts 1 and 4 of my series, which -- as you can see -- is clearly not irrelevant to your arguments.
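The multi-nodal point is easy to verify numerically. The single-slit Fraunhofer intensity follows I/I₀ = (sin x / x)², a standard optics result: it falls to zero at x = π and then rises again to a secondary lobe, which no Gaussian ever does. The sketch below is generic, not tied to Jeffers' actual apparatus or parameters.

```python
import math

def sinc2(x: float) -> float:
    """Normalized single-slit Fraunhofer intensity: I/I0 = (sin x / x)^2."""
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2

# Sample the pattern on a grid extending past the first zero at x = pi.
xs = [i * 0.01 for i in range(1, 700)]          # x in (0, 7)
intensities = [sinc2(x) for x in xs]

# A Gaussian falls off monotonically away from its peak; sinc^2 does not.
# After hitting zero at x = pi the intensity rises again to a side lobe.
secondary_max = max(I for x, I in zip(xs, intensities)
                    if math.pi < x < 2 * math.pi)

print(f"intensity at x = pi: {sinc2(math.pi):.2e}")
print(f"secondary maximum between pi and 2*pi: {secondary_max:.4f}")
```

The side lobe sits at a few percent of the central peak, so the pattern has nodes and secondary maxima. A curve with zeros followed by rebounds simply is not a bell-shaped probability distribution, whatever the central node looks like in a photograph.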

This is the second time you have tried to sweep egregious errors of fact under the carpet and ask that we simply agree to disagree if necessary. This is not one of those cases where there isn't a clearly right and wrong answer. It has been shown using quotations from the papers themselves that you have wrongly attributed the dependent variable in the Jeffers studies. The author clearly states what they are and clearly states how he arrived at them, and those statements bear absolutely no resemblance to your representations. When your opponents are clearly in the right and you refuse to discuss the matter, there is no impetus for them to adopt a softer or more medial position.

Quote:
The audience is smart, and I think vast majority of the members understand my posts very well, and see that I provide the data relevant to the discussion, and reject the one that has nothing to do with it.
These challenges do not become irrelevant simply because you say so. They do not go away simply because you wish them to. The audience indeed sees you ignore posts, and they can see the reasons you give for doing it. But the four-part series you're trying so hard to make go away was actually written with them in mind, to illustrate in terms they would understand exactly in what way you have erred. The one thing you cannot do is stop an audience reading something that's put before them, understanding it, and seeing you assiduously avoid it.

The audience has seen you make such assertions before. They've seen you claim my posts are irrelevant, only to raise the same topic later when it suited you. They've seen you claim things were not done for which there was ample evidence, and seen you reluctantly retract those claims. In other words, you've given the audience plenty of reasons not to believe these edicts of yours. How much evidentiary merit do you think an audience will give repeated declarations that posts specifically tailored to refute your arguments are somehow irrelevant to them?

Quote:
This doesn’t mean that everyone agrees with me, but any intelligent member sees that I am not asking them to waste their time on evaluation of extraneous and useless information.
This verges on saying that if someone disagrees with your opinion, they are therefore not intelligent.

Let's keep in mind what you are asking your audience to do. After previous, fully-rebutted claims to expertise, you're asking them -- once more -- to accept you as an expert in a field, and to accept your judgment -- according to that expertise -- as evidence. When the basis of your judgment is challenged, you are asking them to set aside those challenges and keep following you on a journey of personal appraisal as if nothing happened. How does that achieve the goal of testing whether psychokinesis is a real phenomenon?

Quote:
The smart ones always win!
But it's also possible that those who convince themselves they won further convince themselves they did so because they're smart, or beautiful, or well-liked, or any number of a priori qualities. Winning, in the skeptical sense, consists of arriving at the most parsimonious explanation of the facts as they are constituted from time to time. That is what we are ostensibly doing in this thread -- determining whether the facts as presented in scientific research support a belief in the reality of psychokinesis. A victory along those lines doesn't care about intellectual superiority or prodigious debate skills, or any of the motives that seem to be buzzing about this thread like a cloud of angry insects. If a claimant's goal is merely to win, then the sure loser will be the truth.

Quote:
However, I will respond to a remark by Jay – he wrote something about establishing a baseline in Jeffers’ study. This is not what I meant – I meant that the knowledge of a test’s purpose affects the test results in an undesirable way.
Please link to the statements I make rather than vaguely recalling them or paraphrasing them to your liking. I won't respond to vague claims that I "wrote something about establishing a baseline." I've written many things about many subjects in this thread, with as much precision as my circumstances afford. I would consider it a kindness if you responded to those instead of what might be construed as straw men.

Here's what you wrote:
Originally Posted by Buddha View Post
This is an incorrect calibration procedure because the subjects were told the purpose of the experiment before the calibration, if I understood the procedure correctly (Jeffers didn’t say exactly what the subjects were told about the experiment before it started). It is incorrect because the knowledge affects the subject’s mental state and introduces a bias.
Keep in mind this is in reference to the single-slit experiment. First, the calibration runs are not the same as the baseline runs. So no, you don't understand the procedure correctly. "The equipment is allowed to run for long periods unattended; typically 10 hours overnight, generating 40,000 data sets. The calibration data recorded from these long runs has been analyzed in the following way in order to estimate realistic values for the smallest offset we can unambiguously recover from our experimental data. These data are not used to decide whether our human operators have influenced the equipment." [Jeffers and Sloan, op. cit. 1992, p. 345]

The baseline ("inactive") data were collected interleaved with test data, as I reported earlier. (I will note that you once claimed this would invalidate the results. Have you relaxed your objection to that?) There is no indication the subjects were told a baseline data set was being taken, or even what a baseline data set is. The paper clearly states that the display does not indicate what is happening aside from giving the subject instructions. The subject sees the machine work only when it is collecting active data. "Before the start of each run, 5 data sets are taken while a prompt is displayed stating the direction of effort desired for the upcoming bin in order to give the subject 5 seconds to get ready." [Ibid., p. 343]

You happily admit that you have no idea what exactly the subjects were actually told. Yet somehow you are able to make the determination -- without any study or analysis -- that it biases the result. In science bias is measured, not guessed at. You're attempting to draw a scientific conclusion without having done the science. The effect of volition on PK ability is speculated, but not studied.

The first hitch in your plan to discredit Jeffers is that informed consent is a requirement of human subjects research in the United States. 45 CFR §46.116 "(a) Basic elements of informed consent. Except as provided in paragraph (c) or (d) of this section, in seeking informed consent the following information shall be provided to each subject: (1) A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental;" (emphasis added)

Further, Jahn followed essentially the same procedure. Which is to say, he was certainly beholden to the same federal regulations as Jeffers about what he had to tell his subjects about the research. And he had to at least tell the subjects what he wanted them to do -- try to affect the machine in a given way without touching it. And he took his baseline data interleaved with experimental data in very much the same way as Jeffers. Jeffers is no worse off in this respect than Jahn.

More importantly, none of these concerns has the slightest thing to do with "baselines" or calibration. As I said before, it is as if you are groping for some nefarious connection between disparate elements of the paper, which you now choose to talk about only in vague terms. Perhaps a word salad given under color of expertise would have the power to persuade a lay audience, but it is not objectively valid criticism here. It is valid to wonder about the potential causes of bias. But it is not valid to assume bias exists and has the result you desire, and it is not valid to try to connect that to calibration or baselines simply because those concepts are mentioned in the paper.

(Incidentally, I neglected to respond to this when it was posted several days ago:
Originally Posted by Buddha View Post
Rather than analyzing conditions of all experiments done by his scientific adversaries, I choose the conditions of Jahn’s experiment to show that Jeffers did not reproduce them correctly.
This is not the first time you've tried to misrepresent Jeffers in this way, and not the first time your opponents have corrected you. Jeffers stated plainly to Alcock (as your source Psi Wars indicates) that he did not intend to reproduce Jahn's experiment exactly. He stated as much also in the summary section of the double-slit paper. And, if the contention was that the Jahn experimental protocol was poorly designed, what was to be gained simply by repeating an unprobative protocol? Jeffers -- with assistance from Alcock, and Dobyns and Ibison from PEAR -- strove to create more defensible protocols, and PEAR was happy to have it.)

Quote:
Again, Jay’s references to Palmer’s works are irrelevant because they do not provide any data of the treatment of outliers, which was my request.
No. The obvious straw-man argument aside, none of the four-part series that challenges the foundation of your argument has anything to do with Palmer's treatment of outliers. Instead the series has to do with basic statistical modeling, which -- aside from the Operator 010 issue -- has been the bulwark of your criticism of Palmer and Jeffers. The last two parts especially deal specifically with your error in attributing the dependent variable in Jeffers' two papers.

Regarding the treatment of outliers, your request to produce citations from the literature was satisfied. Contrary to your promise, you did not read them; you merely cast admittedly uninformed aspersions against them and against me for referring to them.

Quote:
One more thing – Jay wrote that single- and double-slit distributions in Jeffers’s experiments are not statistical variables, as he calls them.
First, I never called them that.

Second, I never called them "distributions" either. While they are distributions in the strictest sense in which any ad hoc mapping of outcome to frequency is a distribution, the temptation is to conflate them with the parametrically defined distributions we've been considering, such as Gaussian, binomial, or Poisson. More to the point, you are the one who wrongly thinks the single-slit diffraction pattern -- which you wrongly claim to be simply bell-shaped -- represents one of our parameterizable bell-shaped probability or frequency distributions. This is just as wrong as it can be, not merely a matter of differing opinion. Therefore to avoid any such confusion I have scrupulously referred to the product of the Fraunhofer diffraction model as either a "diffraction pattern" or an "interference pattern," the latter chiefly in regard to the double-slit model.

Another thing they aren't is the dependent variable in either of the Jeffers papers. Your argument suggests (and still does, below) that you believe they were. This was okay with you in the single-slit case where you paid attention only to the central node of the diffraction pattern and you could compare pictures to "confirm" that it was bell-shaped. Your consternation arose only when you couldn't make the double-slit diffraction pattern fit your incorrect assumption of Jeffers' statistical model, and then tried to make that Jeffers' fault.

Quote:
If this is true, Jeffers did hell of a lot of useless work that is not needed for his experiments.
Once again you're in the position of trying to figure out why a qualified practitioner would behave in the inexplicable way you think he did. You don't seem to seriously consider that the answer to the dilemma might be that what you think he did, and why, is not what he really did, or why. You can't or won't correctly name what the dependent variable was in either of these experiments, despite your claim to be an expert in data analysis. Regardless, this makes it hard for you to argue that you correctly understand Jeffers' papers. Understanding the material is, obviously, a necessary prerequisite to criticizing it meaningfully.

If you had followed the third and fourth installments in my series, it would have explained what work Jeffers did and why, and what the dependent variable ended up being in his single-slit and double-slit experiments. Others have already figured it out just by reading the papers themselves. Your wonderment here is bumping up hard against your claims that what I wrote is irrelevant. Not only is it relevant, it clears up the dilemma you're trying to pose today.
Old 5th October 2018, 06:02 AM   #925
aleCcowaN
imperfecto del subjuntivo
 
aleCcowaN's Avatar
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,432
Originally Posted by Buddha View Post
In my opinion Jay’s posts are irrelevant to the discussion so I ignore them for most part, but you have a different opinion.
I haven't got much time available until next Tuesday, but I just wanted to say that it has become quite obvious that you don't know how to reply to Jay's posts, so they became "irrelevant" for you as a matter of need and not as a matter of fact.

Believe me, the notion you dropped here -- that a one-slit experiment's energy-density plot had a power transform applied to it and was then used to predict a level of confidence -- has been one of the most amusing moments in this forum's history. I wish all of this thread's readers could imagine these two things: that "energetic" Gaussian coloured to show a 95% probability, and the fact that such a "Gaussian" would have to flutter like a flag -- and stop being a Gaussian -- to show any possible telekinetic effect, without any "power transform" or other out-of-place method available as an excuse to apply.

Edited by Darat:  Moderated thread
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics that are taking hold of part of the forum and decided to do something about it.
aleCcowaN is offline
Old 6th October 2018, 07:58 AM   #926
Buddha
Thinker
 
 
Join Date: Jun 2018
Location: New York City
Posts: 249
Do you want to know the purposes of statistical analysis? This professionally written article provides basic data on statistical analysis and gives links to other articles covering this topic:

https://whatis.techtarget.com/defini...tical-analysis

I always provide high-quality data coming from reputable sources. I think that the board members are well-informed and smart enough to view with suspicion my own poorly qualified interpretations of the purposes of statistical analysis, so I keep my big mouth shut in this case.

“A thorough, albeit sympathetic, critique of Hasted's experiments has appeared in an unpublished doctoral dissertation by Isaacs (1984). A particularly valuable aspect of this review is that Isaacs obtained information directly from Hasted about certain procedural details which did not appear in the latter's published reports.” Palmer, page 188

It seems strange that Palmer refers to this dissertation, because it is very supportive of various ESP research programs, as I noted before. Apparently Palmer could not find enough articles with strong criticism of Hasted's research, so he resorted to sources sympathetic to Hasted.

“Because of the physical setup, it is hard to imagine how the subjects could have physically bent the specimens while they were attached to the recording devices without detection by an experimenter (or the video recording, when used), or without leaving an obvious tell-tale trace on the chart record. This comment does not apply to the twisted metal strips, however, which were left unobserved in a room. In this case, documentation is insufficient to rule out someone entering the room undetected and manipulating the specimen. Although twists as tight as those observed seem difficult to produce, even granting that shear forces are involved, the difficulty or possibility of mechanically producing such deformations cannot be assessed without extensive control tests.
In none of the cases is information given to reassure the reader that either physical deformation of the specimens or substitution of an already deformed specimen was precluded as a possibility at some point during the session (e.g., before the specimen was mounted). In particular, I could find no mention of specimens having been marked. Although no positive evidence of such manipulations exists, Hasted's lack of sensitivity to this issue in his reports reduces the confidence one can place in the observed deformations being truly anomalous. The fact that his subjects were teenagers is not an argument against trickery being employed, although Hasted sometimes implies that it is.” Palmer, page 189.

Palmer keeps repeating the argument that tampering was possible in Hasted's experiments. As for the possibility of bringing a bent specimen into the lab, a subject would have had to have intimate knowledge of the equipment in order to insert a fake metal bar into it, which is highly unlikely if not impossible.

“The signals on the chart records could in principle be produced artifactually either by direct interaction with the specimen (or the sensor(s) attached to it) or interaction with the peripheral devices (i.e., amplifiers, chart recorder, etc.). Possibilities for direct interaction with specimen and sensor include touch, air currents (e.g., blowing on the specimen), auditory stimuli (e.g., ultrasonic sounds), thermal stimuli, and localized electrical signals.” Palmer, page 190

I have already explained that such interference with the experiments is impossible if one takes into account the manufacturer's preventive measures, which are designed to make it possible to run experiments under lab conditions and even outdoor conditions without undesired effects. Imagine what kind of garbage a CERN experiment would produce if a manufacturer were not careful enough to address Palmer's concerns -- it would be hundreds of ghost particles that cannot possibly exist under any conditions.

I will be back on Tuesday.

Happy Columbus Day !!!
Buddha is offline
Old 8th October 2018, 05:15 AM   #927
Garrette
Penultimate Amazing
 
Join Date: Aug 2001
Posts: 14,693
Originally Posted by Buddha View Post
Do you want to know the purposes of statistical analysis? This professionally written article provides basic data on statistical analysis and gives links to other articles covering this topic:

https://whatis.techtarget.com/defini...tical-analysis

I always provide high-quality data coming from reputable sources. I think that the board members are well-informed and smart enough to view with suspicion my own poorly qualified interpretations of the purposes of statistical analysis, so I keep my big mouth shut in this case.

[snip]
Summary paraphrase by analogy:

"My claims about the literary value of 'Moby Dick' having been shot down, I now provide a link to Umberto Eco's 'How to Write a Thesis,' neglecting to mention that I have not read it and that it does not support my position."
__________________
My kids still love me.
Garrette is offline
Old 8th October 2018, 06:55 AM   #928
steenkh
Philosopher
 
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,426
Originally Posted by Buddha View Post
Do you want to know the purposes of statistical analysis? This professionally written article provides basic data on statistical analysis and gives links to other articles covering this topic:

https://whatis.techtarget.com/defini...tical-analysis
Considering the expert level that your opponents have demonstrated here for a long time, don't you think it is a little late to bring a link to an introductory blurb with a Powerpoint presentation? Or is this the level you would like to continue at?
__________________
Steen

--
Jack of all trades - master of none!
steenkh is offline
Old 8th October 2018, 08:02 AM   #929
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 30,951
Buddha, is the telekinesis real?
theprestige is offline
Old 8th October 2018, 09:54 AM   #930
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Buddha View Post
Do you want to know the purposes of statistical analysis?
No.

What we want from you is a substantiation of your claim to expertise in statistical analysis and experiment design, the basis by which you rejected Alcock, Palmer, and Jeffers. More specifically, we want you to reconcile that claim with the evidence we have presented here of clear and egregious error on your part. Those errors suggest you do not understand the works of those authors to a degree sufficient for your opinion of them to have evidentiary merit.

This:
Originally Posted by Buddha View Post
Double-slit diffraction doesn’t produce a Poisson process, instead it produces a diffraction pattern (wave interference pattern). Since this is not a Poisson process, the t-tests are not applicable to it. But Jeffers used a t-test to draw the conclusion that the experiment debunks the Princeton research.
confirms that you have no idea what Jeffers actually studied. You are entirely factually wrong about what the dependent variable was in Jeffers' double-slit experiment. Further, the statement belies any claim to conceptual competence in the construction of a statistical model, which lies at the foundation of everything you've discussed regarding PEAR and its critics. This must be addressed before you can claim victory over those critics.

Quote:
This professionally written article provides basic data on statistical analysis and gives links to other articles covering this topic:

https://whatis.techtarget.com/defini...tical-analysis
This appears to be yet another thing you Googled in haste. The site is a basic business encyclopedia. The article's author, Margaret Rouse, is not a statistician. She taught computer science and IT integration in New York public schools. The links to "other articles" are merely links into other articles in their encyclopedia, all intended for non-statistician business people and having nothing to do with experimental psychology. The one respondent in the comment section doesn't seem to consider it a useful or correct article. I would hesitate to apply the label "professionally written" to this as an indication of its depth, correctness, or quality.

The only meaningful link is to a YouTube video. The jury is still out over whether YouTube counts as a reputable source. The channel owner, who appears to be narrating the video, is not a statistician. Dr. Amanda Rockinson-Szapkiw is an expert in distance learning. Insofar as she says up front she is abstracting her presentation from "Warren" (by which we understand her to mean any or all of the excellent works of Carolyn Warren), we can consider it reasonably well founded. Further, based on the content presented, I accept Rockinson-Szapkiw as a competent interpreter of Warren.

I do not accept you in that role, however.

The question is not whether someone else understands statistics. The question is whether you can demonstrate that you understand the works you have criticized. The answer to that question cannot be supplied by any external authority. In the past you haven't always shown a thorough knowledge of the sources you cite. Did you watch the linked video? Especially the part about how to use t-tests appropriately? How to design an experiment?

One of the topics Dr. Rockinson-Szapkiw covers is pilot studies. (I'm going to abbreviate her last name as R-S hereinafter.) She describes one way to do them as varying certain things just to see whether they have an effect on the outcome -- much as Jeffers and Ibison did with the double-slit apparatus. You wrongly characterized that research as merely duplicating Jahn's, and tried to take him to task for not following Jahn's protocol. The paper expressly stated a different intent, one that Dr. R-S covered in describing how one may do research.

She discusses dependent and independent variables and the different types of variables in terms of how to analyze them, a topic you had quite a bit of difficulty with when we were discussing categorical variables. You wrongly confused them with ordinal variables, and Dr. R-S would have cleared up that misconception on your part. There are a number of categorical variables in the PEAR work. You will need to demonstrate that you understand how they work before you can claim that others handled them improperly.

Let's transform the rest of this discussion into a quiz. (I will ask the indulgence of the other contributors to give Buddha a fair chance to research and respond to these before jumping in themselves.)
  1. Is the Jahn REG experiment an example of in-group experiments or between-group experiments?
  2. Randomization is a technique that applies to which of the two experiment classes in the previous question?
  3. What does Dr. R-S say is one possible consequence of applying parametric analysis techniques to data sets with extreme outliers?
  4. What alternatives to randomization does Dr. R-S describe?
  5. On what type of data in Jahn's REG experiment would the chi-square statistic be appropriate?

These are not mere gotcha! questions. These are issues brought up in the video that relate to specific errors you have committed in this thread while attempting to discredit Dr. John Palmer and Dr. Stanley Jeffers. I categorized and discussed previously your approach to external authority. You didn't address it or defend against it, and today is yet another example of the category in which you seem merely to skim the offering and thus fail to realize how much it hurts your case when examined more carefully. I feel that I have to draw attention to your invocation of authority that refutes or contradicts you. Any claim to competence or expertise in the relevant field requires a reconciliation of that dissonance.

This is what experts can do. They can take material presented by other authorities and speak knowledgeably about how that material applies to the question under discussion. This is what I can do with respect to Dr. R-S (who is here a proxy for Warren) and the various studies we have discussed regarding PEAR. If your argument is to survive, you must do likewise. Can you answer those questions? Can you reconcile those answers with the arguments you have made in this thread? Try as you might to discredit the four-part series I posted, it is still relevant and still points out your errors. It is not going away.

Quote:
I always provide high-quality data coming from reputable sources.
The quality of your sources varies, but generally centers around convenience sources easily accessible from Google by a brief search -- certainly not the standard works. I won't belabor your already-rebutted approach to external authority today more than I already have.

Quote:
I think that the board members are well-informed and smart enough to view with suspicion my own poorly qualified interpretations of the purposes of statistical analysis, so I keep my big mouth shut in this case.
That policy seems to be a departure from your previous posture.
Originally Posted by Buddha View Post
Some mathematicians dispute the research's validity, but, as always, there is no complete agreement about it; some mathematicians support it. Well, I am not going to quote them because I am quite able to analyze it myself.
Your argument has been, as I reported earlier, that as an expert in statistical analysis you are qualified to declare that Alcock, Palmer, and Jeffers have conducted nothing more than an incompetent, biased, and dismissive analysis of Jahn and PEAR. Now as far as that argument goes you are either an expert or you aren't. That is, you are either an expert and can demonstrate on command the expertise upon which your judgment against PEAR's critics relies, or you are not an expert and hence must rely on outside authority to demonstrate their expertise, whereupon the credit for refutation -- if any -- is theirs and not yours. Your introductory tutorial purporting to explain the purposes of statistical analysis does not redeem your errors. As such it's of only passing relevance. However, if such a summary were needed in some circumstance, I would expect that an off-the-cuff presentation along those lines would be something a statistics expert would be quite expected and able to do. That's the essence of expertise -- the ability to use one's own knowledge and experience to address specific questions.

Whether you claim to be an expert or not, you are still required to demonstrate that you understand the material you propose to criticize if you want that criticism to have any evidentiary merit. We have shown at length that you can't do that with the Jeffers papers, with the Jahn paper, with Alcock's book, or with Palmer's summary. This means your comments do not rise to the level of refutation, even when they manage to rise above mere declarations of your critics' "obvious" inferiority. And if you now mean to recharacterize your approach as "poorly qualified interpretations," I am prepared to accept that revision. I believe I have shown that you do indeed have difficulty understanding the purpose of specific research experiments, and this casts doubt on the value of your criticism.

I abstract your argument today as, "I provided a citation and my opponent did not, therefore I win." This is simply not how external authority works, how debate works, or how your argument in this thread works. No external document can possibly prove you know what you're talking about.
JayUtah is offline
Old 8th October 2018, 10:35 AM   #931
JayUtah
Penultimate Amazing
 
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by aleCcowaN View Post
Believe me, the notion you dropped here about a one-slit experiment energy density plot being applied a power transform and then used to predict a level of confidence has been one of the most amusing moments in this forum's history.
Many times I've seen claimants simply refuse to believe that they can be that wrong. Most search for some de minimis way in which they may have committed a minor error, but will not address the glaring fundamental errors they make. You understand that Buddha is that wrong. I would venture that some others who have either read the Jeffers papers or my four-part series or both also understand that he is that wrong. Objectively according to the facts he is that wrong. The exercise as I see it is to get Buddha to see how and why he is that wrong and restart the argument with that new understanding.

Part of that new understanding, naturally, would be not to base an argument on claims to expertise that are then undermined by egregious conceptual missteps. But that could be construed as too personally directed, so we tread carefully there. Regardless of any claims to expertise, the most useful correction would be simply to look more constructively at the arguments and refutations and back away from the climate of gladiatorial combat to the death that has resulted in this thread being moderated. Effective arguments, and tests of arguments, cannot be had if the standing refutation is simply "My opponents don't know what they're talking about," followed by no further discussion. In many cases we can demonstrate that we do know what we're talking about.

In this case I think I can even determine how Buddha's line of reasoning went off the rails on the issue of the dependent variables. I may be wrong, of course. I can't see into his head. But here's what I think happened.

Early in the discussion he posted this:

Originally Posted by Buddha View Post
Fluctuations of electrons from the surface of a metal form a Poisson distribution, as the theory shows. In Princeton study the baseline is 0.5 (actually, this is a Bernoulli trials process with the limiting case being Poisson distribution).
The statement in blue is true enough. ("Formed" is not quite accurate. "Can be approximated by..." is better.) The statement in red is also true. They just have nothing directly to do with each other as the design of Jahn's experiment went. They're separately true for different reasons, and Buddha seems to have melded them together as a connection that all valid models must have. Under that misconception, the perception that a hastily-inspected Fraunhofer single-slit diffraction pattern casually resembles a bell-shaped normal probability distribution would tend to reinforce the error that the underlying physical process is itself the dependent variable -- or at best that the underlying process and the dependent variable must share that resemblance. And under that same misconception, the double-slit pattern is directly unsuitable as a dependent variable for analysis that requires a bell-shaped distribution. The original mistake appears to be to have conflated two different cases of a Poisson distribution.

That's the tl;dr ("too long; didn't read") summary.

Let's get into it. First a concession. The statement in red correctly describes the relationship between a Bernoulli process and the Poisson distribution. Previously I represented Buddha's analysis of Jahn as failing to understand the dependent variable. It's clear upon rereading his initial statement that he understands it to a higher degree than I gave him credit for. I therefore retract the accusation and apologize. This is not to say he has not committed a conceptual error here. It's just not the error I accused him of.

A Bernoulli process is what I described in part 1 of my series. I didn't call it that because I was trying to avoid jargon and therefore make it accessible to a lay audience. Tossing a coin many times is a Bernoulli process, a process whose variable has only two outcomes and is governed by a probability. The number of successes in many runs of such a process is known to tend toward the Poisson distribution. In fact, a number of textbooks even present and derive the Poisson distribution right off the bat in terms of the binomial distribution, the actual distribution defined by the Bernoulli process, notated B(n,p).
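That limiting relationship is easy to check numerically. Here is a quick Python sketch (my own, purely illustrative -- it appears in none of the papers) comparing the binomial distribution B(n, λ/n) against Poisson(λ) as n grows:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """P(k successes in n Bernoulli trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(exactly k events) under a Poisson distribution with expected rate lam."""
    return lam**k * exp(-lam) / factorial(k)

lam, k = 2.0, 3
for n in (10, 100, 10_000):
    gap = abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam))
    print(n, gap)  # the gap shrinks as n grows
```

The binomial probabilities close in on the Poisson value as the trials are sliced finer and finer, which is all that "limiting case" means here.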

Now the statement in blue. That's describing the quantum nature of electricity. Don't glaze over; this is one of the easiest and most intuitive things about quantum mechanics. We will stay far afield from quantum electrodynamics. It comes down to there being such a particle as an electron, and that electron having a fixed amount of electromagnetic charge -- negative charge, to be specific. From that it follows that at a teeny-tiny scale of observation, electricity flows in discrete increments, the amount occurring in single electrons.

The Poisson distribution is, in its simplest exposition, governed by a single parameter λ, the rate at which success is expected over some fixed interval -- time, space, amount of beer. The distribution that results from its probability mass function describes the probability that a specified number of successes (which may differ from the expected number) will occur in that same domain -- the same time, the same space, the same amount of beer. If you habitually order beer from Skeezy Brewery, and it has -- on average -- 2 spiders in each 20-liter keg, the Poisson distribution lets you estimate the probability of ordering a keg for your sister's wedding that has zero spiders. It relates to the binomial distribution roughly in the sense that each cup of beer drawn from the keg is a binary trial -- you either get a spider or you don't, and in some very unfortunate cases you'll get both spiders in one cup. (The mother-in-law gets that one.) Clearly, the smaller the cup you use, the more vanishingly small the odds that you'll get the Mother-In-Law shot. Reducing other kinds of outcomes to Bernoulli sequences by slicing them finer and finer is what "in the limit" means in this context.
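For the record, the spider arithmetic is a one-liner; at k = 0 the Poisson mass function collapses to e^(-λ). A sketch using the λ = 2 figure from the keg example above:

```python
from math import exp

lam = 2.0                # expected spiders per 20-liter keg
p_zero = exp(-lam)       # Poisson P(k = 0) is just e^(-lambda)
print(round(p_zero, 3))  # → 0.135: about a 13.5% chance of a spider-free keg
```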

If you're measuring electronic current in milliamps, such as in a typical handheld electronic device, the number of electrons passing a fixed point in a circuit in any given second is still on the order of 10^15 electrons. Quantum-level effects on the current measurement aren't even slightly significant compared to effects, say, from temperature. The spirit of Poisson being a "limiting case" is what happens when we start measuring it at finer and finer time slices and at precisions approaching being able to measure individual electrons.

For some current of A amperes, which for practical purposes we’ll say is very, very small, the expected electron flow for some slice of time is λ(A). For the purposes of explanation, we’ll say that the current is so very low and the time slice so narrow that the value λ(A) is a small integer -- say, 100. The Poisson distribution lets us know the probability that the actual electron flow in that time slice will be 99 or 103 or 27.
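Putting numbers on those outcomes with the mass function (again an illustrative sketch of my own, using the λ(A) = 100 figure above):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k events) under a Poisson distribution with expected rate lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 100.0  # expected electrons in our chosen time slice
for k in (99, 103, 27):
    print(k, poisson_pmf(k, lam))
# 99 sits at the mode (roughly a 4% chance per individual count), 103 is
# slightly less likely, and 27 is so far below expectation as to be negligible.
```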

That's the basis of a noise circuit. And the distribution of those numbers will also start to look like a Poisson distribution clustered around the expected rate of electron flow in an A-milliamp circuit, λ(A) with allowance for sampling rate.

If it seems like I'm writing part 1 of my series again, it's because I am. Relevance indeed.

The trick for the electrical engineer is to sample a tiny flow of current at colossally high speeds with enough sensitivity to essentially measure individual electrons -- or as close to that as you can get. But for useful electrical circuits, such as in Jahn's REG machine, the measurement of tiny changes in charge has to be translated to useful changes in some other electrical property such as voltage or resistance, and at a much higher scale. Let's say we have a setup similar to the above, where we sample the electron passage in groups of 100, such that each of our samples produces a number between 0 and 100 indicating how many electrons passed our measuring point in that interval. Let's arbitrarily say that interval is 10^-12 second. Through the magic of electronics, let's say we're able to scale that "signal" with its 101 discrete values to a voltage output that varies between 1 volt (at signal = 0 electrons) and 1.5 volts (at signal = 100 electrons). The mean voltage would be derived from λ(A), and I haven't specified enough of the solution here to give it a numerical value. It doesn't matter what it actually is for the purpose of illustration.

Now back to red. The coin-toss here doesn't have the slightest thing to do with individual electrons. All it is, in the Jahn experiment, is whether that output signal, sampled at some suitable rate, is presently above or below the mean. The Bernoulli process that gives rise to the blue statement is the occurrence of a single electron at the measuring point. The Bernoulli process that gives rise to the red statement, the dependent variable in the Jahn experiment, is whether the aggregated flow of electrons is presently above or below its mean value at some instant.
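The two layers can be made concrete with a toy simulation. To be clear, this is my own sketch, emphatically not Jahn's circuit; the sample size, probability, and threshold are invented for illustration:

```python
import random

random.seed(42)
N_ELECTRONS = 100   # electrons that could pass in one time slice
P_PASS = 0.5        # per-electron chance of passing (Bernoulli process #1)
K_SAMPLES = 10_000  # number of aggregated samples taken

# Layer 1: aggregate the electron-level Bernoulli process into noise counts.
counts = [sum(random.random() < P_PASS for _ in range(N_ELECTRONS))
          for _ in range(K_SAMPLES)]
mean = sum(counts) / K_SAMPLES

# Layer 2: derive a *new* binary variable -- is the flow above its mean now?
# This derived bit sequence, not the electron-level process, is the coin toss.
bits = [c > mean for c in counts]
frac_high = sum(bits) / K_SAMPLES
print(frac_high)  # lands near 0.5 by construction
```

The point of the sketch is the indirection: the bit sequence is a different Bernoulli process from the one that produced the electrons, even though both happen to admit a Poisson approximation.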

"But Jay," you might say. "You're just obfuscating. You're measuring electron flow either way. Therefore they are equivalent."

No, they aren't. They are quantitatively correlated in this case, but not qualitatively dependent. The indirection is key. Converting the binary flow of individual electrons to an aggregated noise, and from that noise deriving a new binary variable -- which then itself can be approximated with a different Poisson -- is utterly at the core of Buddha's error. Clipping and sampling are not the only things we can do statistically with a noise signal. We could, for example, measure the sequence of peak voltages as numbers. This would be an entirely new and different sequence of values derived from that signal. In fact the world of signal processing provides us with a cornucopia of derivative signals: frequency distribution, time-derivative, time-integral, dwell time, coherence to stable clock -- the list goes on and on. And the variables we derive don't have to be binary-valued, or even scalar-valued. It is merely an accident of design that the derivation Robert Jahn chose can be approximated by the same kind of formalism as the quantum behavior that began the process.

Jeffers' derivations are equally robust. But because the physics that governs diffraction is not at its heart Bernoullian, and the resulting derivation is not Bernoullian either, the indirection -- or derivation -- is easier to see in all its glory in those experiments than in Jahn's. The dependent variable in Jahn's experiment is the number of high-clipped, uniformly-sampled peaks in the noise function over k samples. It doesn't matter for the sake of the experiment how the noise function was created. Buddha appears to have been misled by the coincidence of both being approximable via Poisson, and assumed a need for that always to be the case. It doesn't always have to be the case.

The dependent variable in the single-slit experiment was the horizontal displacement of the centroid of the diffraction pattern. Unlike in the Jahn experiment, in which the total number of bits and the number of 1-bits were always integers, the x-coordinate of the centroid was a continuous-valued variable. By no means does that make it statistically intractable. And by no means is the experiment invalid because the derivation of the dependent variable is governed by different mathematics than the physical process that produced the diffraction pattern.
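As a toy illustration (hypothetical shift and arbitrary units, not Jeffers' apparatus): the centroid of a simulated single-slit intensity profile is a perfectly tractable continuous variable.

```python
# Toy single-slit profile (hypothetical 0.03-unit shift, arbitrary units):
# the centroid's x-coordinate is a continuous-valued dependent variable.
import math

xs = [i / 100.0 for i in range(-500, 501)]      # detector positions

def sinc2(x, shift=0.0):
    """Fraunhofer-style sinc^2 intensity, optionally shifted by `shift`."""
    u = math.pi * (x - shift)
    return 1.0 if u == 0 else (math.sin(u) / u) ** 2

intensity = [sinc2(x, shift=0.03) for x in xs]  # pattern nudged by 0.03 units
centroid = sum(x * I for x, I in zip(xs, intensity)) / sum(intensity)
print(round(centroid, 3))                       # near the 0.03 shift
```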

The dependent variable in the double-slit experiment was the normalized difference in intensity between brightest and darkest horizontal bands in the diffraction pattern. Again this is a continuous variable. Again it doesn't matter that the mathematics behind Fraunhofer diffraction itself don't have anything to do with normalized contrast. The need for qualitative congruence between the underlying process and the dependent variable is something Buddha invented out of whole cloth because it was accidentally congruent in the one experiment he studied prior to meeting PEAR's critics. And it's not the first time he has made up such "rules" for statistics from individual studies that showed some trait that was mistaken for a general property. His grasp of experiment design and statistical modeling suffers, in my opinion, from having been informed by too few examples and too little formal study.
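Again as a toy sketch (a made-up cos²-fringe model with a chosen visibility, not the actual experiment's data): normalized contrast is computed directly from the intensity extremes, and is again continuous-valued.

```python
# Toy fringe model (hypothetical, not Jeffers' data): cos^2 fringes with a
# chosen visibility, scanned across the central band of the pattern.
import math

xs = [i / 1000.0 for i in range(-125, 126)]    # one fringe period, centred

def fringes(x, visibility=0.8):
    """cos^2 fringes under a sinc^2 single-slit envelope (arbitrary units)."""
    u = math.pi * x
    envelope = 1.0 if u == 0 else (math.sin(u) / u) ** 2
    return envelope * (1 + visibility * math.cos(8 * math.pi * x)) / 2

intensity = [fringes(x) for x in xs]
i_max, i_min = max(intensity), min(intensity)
contrast = (i_max - i_min) / (i_max + i_min)   # normalized contrast
print(round(contrast, 3))                      # close to the 0.8 visibility
```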

Buddha continues:

Originally Posted by Buddha
Except for the electron emission part of the equipment, it is possible that other equipment parts introduce the bias, which may result in non-Poisson process. To rule out this possibility, the researchers run the device without the subjects being tested, collect the results and use certain statistical methods to determine if results form a Poisson distribution.
This exhibits a fundamental misunderstanding of the design of the REG, which I explained in detail above and earlier in part 4 of the series. The accident is not a departure from the Poisson ideal in the output. The accident is that both the behavior of electrons and the behavior of the variable Jahn derived from it are binary in nature. The machine was designed this way to suit a particular purpose. The principles that arose in its design are not the governing principles for any dependent variable and any underlying process.

The delicate question is how likely it is for an expert in statistics to make this mistake. The unfortunate answer is "Not at all likely." It's the kind of mistake a novice would make who is looking at a few examples and a few isolated principles and trying to formulate a coherent whole out of them in a hurry. Without appropriate guidance, this would indeed result in "rules" that seem to hold for a few examples, but which don't hold generally and don't conform to any theoretical foundation. This is what we see in Buddha's argument.

This is why I think Buddha is so confused about how Jeffers can do statistical analysis on data from a double-slit experiment.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 9th October 2018, 06:42 AM   #932
Buddha
Thinker
 
Buddha's Avatar
 
Join Date: Jun 2018
Location: New York City
Posts: 249
I would like to respond to several posts at once.

I could draw on my experience as a control systems engineer and talk effusively about various types of control systems. But such recollections of past glory would bore the audience to death. Instead I want to keep them wide awake, because I want to emulate Uri Geller, who claimed that he could bend spoons placed in front of TV screens. I want to make profits by telekinetically pickpocketing them.

Several opponents noted that I do not respond to certain posts. I have my reasons for ignoring them. So far no one has responded to my posts criticizing section 9 (the metal-bending section) of Palmer's report.

“I am sorry to tell you that you are very wrong in thinking that there is a "vast majority" that see that you provide relevant data. The recent poll showed that not a single one of your readers thought that you have done well.”

I would like to know how that poll was conducted, how its data was collected and how many board members participated in it.

“It is a common mistake for lay people to confuse stress and strain but you wouldn't expect an engineer to confuse the two.”
I really do not understand the meaning of this remark. Hasted has a PhD in Experimental Physics, which requires a good understanding of structural mechanics. Are you saying that, because he is not an engineer, he made a mistake?

“Now you may say that your argument is not pure ipse dixit, not based on simple declarations of intellectual superiority, and has been documented all along the way with external references -- which, by the way, you demand that your opponents also provide before you will listen to them. We covered this already; your external references fall into a number of impotent categories. First, in some cases you provide such documented presentations only after one of your opponents raises a topic. Then your presentation is little more than undirected and unneeded didactics. What meaning should an audience take away from the timing? Second, in some cases your reference merely defines or mentions a concept that appears in your argument and does not in the least support the argument you have made from it. It is as if your argument is, "See, this is a real concept, therefore what I say about it must be true." This is what happened when you tried to defend your home-grown concept of randomness (i.e., to preclude excising points from a data set). You merely linked to the Wikipedia article and ignored the argument your opponents drew up using that information to show that data must be excisable by the very nature of randomness. In short, you can't show that you understand the references you link to. That leads us to a third category: in some cases you cherry-pick items from a source while ignoring that the context firmly refutes what you plan to use that reference for. This was the case when you tried to document the proper use of the t-test. And finally, when you run across something like your discovery of the power transform, you try to interject it into aspects of the discussion where it clearly doesn't belong.”

I will let the audience judge whether my approach works or not. Apparently, it doesn’t work for you, but I am at peace with that.

“Although one can imagine many sources of gross electrostatic or
electromagnetic artifacts, localizing them to a particular specimen is a
different matter. However, as Hasted recognized, it is possible that a
strain gauge could be triggered either by the subject building up an
electrostatic charge in his body and moving a finger, say, close to the
specimen, or by creating dynamic electrostatic induction through gross body
movements. On the other hand, such potential effects, even if the requisite
movements had escaped visual detection by the experimenter(s), would have
needed to overcome the electrical shielding of specimens routinely applied
in Hasted's later work. Electrostatic effects should also have been picked
up by the touch detectors.” Palmer, page 190

Electrical shielding is routinely used for this type of experiment, and Hasted used it, as Palmer noted. Palmer’s criticism missed the mark. Besides, he didn’t identify potential sources of electromagnetic artifacts, which makes his criticism hollow.
“Another argument against hypotheses based upon localized artifacts is
the frequent occurrence of "synchronous" signals associated with sensors
located up to several feet apart. The problem is that the signals could
conceivably radiate out from the vicinity of one sensor to another, even
over the distances of separation utilized. If the signals were truly
synchronous, this hypothesis might be precluded. However, Hasted's
recording mechanism was not adequate to define synchronicity with the
necessary precision; i.e., the diagnostic equipment was too slow to measure the time it would take for the radiation to propagate. Hasted's
"synchronous" signals can only be considered synchronous in a loose sense of
the term.” Palmer, page 191

It is left to the manufacturer to determine whether the signals in question are synchronous or not in the true sense of the word; Hasted just followed the manufacturer’s instructions when he chose this equipment for his work. Palmer suggests that the manufacturer provided wrong instructions, which is equivalent to saying that the equipment is faulty. If this were true, all other experiments unrelated to metal bending would have produced incorrect results. However, there were no complaints regarding the equipment, so Palmer’s assessment of the experiment conditions is false.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 9th October 2018, 04:04 PM   #933
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Buddha View Post
Several opponents noted that I do not respond to certain posts. I have my reasons for ignoring them.
And what would those reasons be? It has been pointed out in no uncertain or incomplete terms that you are grossly, factually wrong on a number of points having to do with your treatment of PEAR, Palmer, Alcock, and Jeffers. You categorically refuse to entertain any discussion about that, even though your supposed expertise was your major premise, and even though your opponents have provided substantial evidence to refute that premise.

Please favor us with these reasons that are so powerful as to let you ignore such a demonstration with no effect upon your argument.

Quote:
So far no one had responded to my posts criticizing section 9 (metal bending section) of Palmer report.
I told you I would do so as soon as we finished with PEAR and its critics. Clearly we are not finished. I don't intend to move on simply because you want to change the subject. And I certainly don't think the truth will be served by doing so.

Further, if your opponents are correct with respect to how you handled PEAR et al., then a major shortcoming in your overall handling of the Palmer paper would be your inability to correctly understand his criticism and the authors he criticized. It would be imprudent to continue with that shortcoming uncorrected, as none of your subsequent misinformed commentary could be trusted to have much value as evidence. It would be even more imprudent for an opponent to continue to engage you if there is unanswered evidence that you will not acknowledge errors he finds in your subsequent comments. One cannot credibly beg the question that one is reasonable; that can only be demonstrated.

Quote:
I will let the audience judge whether my approach works or not.
You are ignoring facts and asking leave to do so without consequence. That approach doesn't work for anyone. The audience has already given you their judgment. Your reaction was to cast aspersion on the clerical methods behind it. That doesn't seem very consistent with saying you're willing to abide their vote.

Quote:
Apparently, it doesn’t work for you, but I am at peace with that.
It doesn't work for an honest examination of your argument. Your peace of mind is irrelevant to whether you are correct or not regarding PEAR, Palmer, Alcock and Jeffers. You claimed PEAR's critics were biased and incompetent, and your attempts to prove that revealed instead that you don't know what you're talking about. You've been corrected at length, several times. Ignoring that correction doesn't make it go away or rehabilitate your failed claim.

You criticize your opponents for not following you on a change of subject, but they have very good reasons for keeping you on the subject we're at. Your approach asks them to simply "agree to disagree" on matters where they have presented you with contrary facts that you will not acknowledge. They have no incentive or obligation to agree to that. Instead they are well within the bounds of civility and reason to attempt to hold you accountable for your initial claims until closure is reached.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 10th October 2018, 04:23 AM   #934
P.J. Denyer
Illuminator
 
Join Date: Aug 2008
Posts: 4,984
Originally Posted by Buddha View Post
“I am sorry to tell you that you are very wrong in thinking that there is a "vast majority" that see that you provide relevant data. The recent poll showed that not a single one of your readers thought that you have done well.”

I would like to know how that poll was conducted, how its data was collected and how many board members participated in it.

Just look at the thread, it's hardly rocket science and links have been provided previously.

http://www.internationalskeptics.com...d.php?t=331998
__________________
"I know my brain cannot tell me what to think." - Scorpion

"Nebulous means Nebulous" - Adam Hills
P.J. Denyer is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 06:07 AM   #935
Buddha
Thinker
 
Buddha's Avatar
 
Join Date: Jun 2018
Location: New York City
Posts: 249
I am going to reply to several posts at once.

“Why is your fixation on Jay relevant?”
Because he didn’t provide any data regarding the outliers, as I requested.

“Believe me, the notion you dropped here about a one-slit experiment energy density plot being applied a power transform and then used to predict a level of confidence has been one of the most amusing moments in this forum's history. I wish all of this thread readers could imagine these two things: that "energetic" Gaussian coloured to show a 95% probability, and the fact that such "Gaussian" should flutter like a flag -and stop being a Gaussian- to show any possible telekinetic effect -without any "power transform" or other out of place method available as an excuse to apply-“.

Please read the book Options, Futures, and Other Derivatives by Hull. The idea of using power transforms to predict, as you put it, a level of confidence is not mine; it is very old. As the book explains, a Nobel Laureate, the economist Merton, successfully used it to predict variations in option prices. Option prices form a non-Gaussian distribution, but when a logarithmic transform (a limiting case of the power transforms) is applied to it, the new distribution is Gaussian. Merton won his Nobel Prize for long-term prediction of option prices. Anyway, thank you for giving me an opportunity to show off my erudition.
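To illustrate the general idea with a toy sketch (simulated lognormal data with made-up parameters; whether any of this applies to the diffraction measurements is a separate question): a logarithmic transform turns a strongly right-skewed distribution into a symmetric, Gaussian one.

```python
# Toy demonstration (simulated data only): log-transforming lognormally
# distributed values yields a Gaussian, checked informally via sample skewness.
import math
import random

random.seed(3)
data = [random.lognormvariate(0.0, 1.0) for _ in range(50_000)]

def skewness(xs):
    """Biased sample skewness: third central moment over sigma cubed."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

raw_skew = skewness(data)                          # strongly right-skewed
log_skew = skewness([math.log(x) for x in data])   # near zero after transform
print(round(raw_skew, 1), round(log_skew, 2))
```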

"If you're measuring electronic current in milliamps, such as in a typical handheld electronic device, the number of electrons passing a fixed point in a circuit in any given second is still on the order of 1015 electrons. Quantum-level effects on the current measurement aren't even slightly significant compared to effects say, from temperature. The spirit of Poisson being a "limiting case" is what happens when we start measuring it at finer and finer time slices and at precisions approaching being able to measure individual electrons.”

Actually, your argument supports my point of view – I wrote that quantum mechanical effects are not important in regular current measurements, so Palmer’s assertion that Hasted’s current measurements are not precise enough is false.
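As a quick sanity check on the order of magnitude quoted above (1 mA taken as the illustrative current):

```python
# Order-of-magnitude check of the figure quoted above: electrons per second
# carried by a 1 mA current.
ELEMENTARY_CHARGE = 1.602176634e-19   # coulombs per electron
current = 1e-3                        # amperes (1 mA)

electrons_per_second = current / ELEMENTARY_CHARGE
print(f"{electrons_per_second:.2e}")  # ~6.24e+15, i.e. on the order of 10^15
```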

“And what would those reasons be? It has been pointed out in no uncertain or incomplete terms that you are grossly, factually wrong on a number of points having to do with your treatment of PEAR, Palmer, Alcock, and Jeffers. You categorically refuse to entertain any discussion about that, even though your supposed expertise was your major premise, and even though your opponents have provided substantial evidence to refute that premise.”

You forgot to mention another reason – not all my opponents are engineers, and a discussion of purely engineering topics is beyond their self-confidence level. Are you one of them? You could prove me wrong by responding to at least one of my posts regarding Hasted’s research.

“You are ignoring facts and asking leave to do so without consequence. That approach doesn't work for anyone. The audience has already given you their judgment. Your reaction was to cast aspersion on the clerical methods behind it. That doesn't seem very consistent with saying you're willing to abide their vote.”

I will abide their vote if you provide the information that I requested regarding the voting procedure. The absence of this information shows beyond a shadow of a doubt that I am right – by and large the audience rejects your posts.

“(3) Although many of the effects upon which conclusions were based did
not occur consistently, the conclusions were not backed up by the requisite
statistical analyses, a point also stressed by Stokes (1982).
Examples of how these deficiencies contribute to ambiguity in the
interpretation of Hasted’s results will now be given. Perhaps the most
important of these examples concerns the basic nature of the recorded
signals. Although the signals are often referred to in passing as
reflecting strain (i.e., extension, contraction, or bending of the metal
specimen), only rarely is such a conclusion justified. This is true even if
we agree that the signals are not artifactual in origin. Hasted’s recent work has indeed illustrated that many of the signals in that work
seem to be electrical in nature, which raises the possibility that some of
the effects in the earlier work may also have been electrical. Only in
those sessions where the signals were shown to conform to permanent
deformations of the specimen does the case for their representing actual
strain effects appear to be strong (Hasted, 1977). The incapacity to
characterize the remaining signals is attributable in part to the suboptimal
recording techniques mentioned above.” Palmer, page 193

If the signals are not artifactual, as Palmer noted, then what is their origin? The only possible explanation is that these signals were generated by the subjects during metal bending experiments. Any engineer or physicist would agree with me. But Palmer is neither.

Palmer compares Hasted’s old experiments with more recent ones. But the set-ups are different, so this comparison makes no sense, as every engineer and physicist would confirm. It seems strange that the Army folk chose a person with zero engineering background to write this review. The CIA does not make such mistakes, at least I hope so. Their telekinetic research is grounded in reality (recently they have made public some of their ESP research).
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 01:17 PM   #936
Thor 2
Illuminator
 
Thor 2's Avatar
 
Join Date: May 2016
Location: Brisbane, Aust.
Posts: 4,352
Originally Posted by Buddha View Post

“It is a common mistake for lay people to confuse stress and strain but you wouldn't expect an engineer to confuse the two.”
I really do not understand the meaning of this remark. Hasted has a PhD in Experimental Physics, which requires a good understanding of structural mechanics. Are you saying that, because he is not an engineer, he made a mistake?
After sifting through your posts I found the above, which I assume is directed at me.

How you can read through the post of yours I quoted and not see the meaning of the remark is beyond belief. You tell us that Hasted has a PhD and therefore we
should accept what he has written as fact, even though he doesn't seem to know the difference between stress and strain? You call yourself an engineer and you can't see this? This is pretty basic stuff.
__________________
Thinking is a faith hazard.
Thor 2 is online now   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 05:03 PM   #937
Loss Leader
I would save the receptionist.
Moderator
 
Loss Leader's Avatar
 
Join Date: Jul 2006
Location: Florida
Posts: 26,009
Originally Posted by Buddha View Post
I will abide their vote if you provide the information that I requested regarding the voting procedure. The absence of this information shows beyond a shadow of a doubt that I am right – by and large the audience rejects your posts.

The information is all right there in the thread. One wonders if you aren't deliberately ignoring it. On September 19, a poll was created asking, "Who is winning the 'Is the Telekinesis Real?' Thread?" The creator of the poll clarified "winning" as, "arguments, facts and explanation."

9 people were listed, including you and JayUtah as well as people like DaveRogers and halleyscomet. Two null choices were given - the Hindenberg (It's all going down in flames) and Other.

62 members voted, each able to make one or more choices. Once a person voted, that member was locked out of the poll and could not vote again. Thus, the results are the votes of 62 distinct members casting 104 or 105 votes. Of those, 49 votes were for JayUtah. The next highest vote-getter was aleCcoweN with 16. The two null options together garnered 14 votes. You received only 1 vote, tying Pooneil for the fewest votes.

Assuming that you voted for yourself, that means that not one single person of the other 61 voters believed you were demonstrating "arguments, facts and explanation" better than any other person listed. In fact, JayUtah was 49 times as popular as you as the person most adept at "arguments, facts and explanation."

The only way to lose more convincingly would be to have your name not appear in the poll at all.
__________________
I have the honor to be
Your Obdt. St

L. Leader
Loss Leader is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 05:18 PM   #938
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 30,951
Buddha, is the telekinesis real?
theprestige is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 05:43 PM   #939
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Buddha View Post
Because [Jay] didn’t provide any data regarding the outliers, as I requested.
That's a bald-faced lie. You were directed several times to Zimbardo's book The Lucifer Effect, but made several vague excuses for not consulting it or his other research. AleCcowaN also presented a clear citation from the medical research literature, which you ignored entirely. Further, your own source from a few days ago spoke about the consequences of including outliers in parametric analysis. I asked you specific questions regarding it, which you have not answered. Your allegations regarding outliers have been thoroughly addressed.

In the larger sense, there is no quid pro quo. You are not excused from addressing facts that dispute your claims simply because you think your opponents haven't jumped through some hoop. The facts are what they are, regardless of what your opponents do or say.

Quote:
The idea to use power transforms...
The power transform has absolutely nothing to do with the egregious errors of comprehension you've committed regarding Jeffers and the diffraction experiments. You don't know what the dependent variables are in those experiments. That isn't fixed by handwaving references to the power transform.

Quote:
Actually, your argument supports my point of view – I wrote that quantum mechanical effects are not important in regular current measurements, so Palmer assertion that Hasted’s current measurements are not precise enough is false.
Irrelevant, as we're talking about Jahn, not Hasted. You wrongly suppose the Poisson distribution that formalizes quantum-level fluctuation in electrical current is what Jahn used as the dependent variable in the PEAR study. I have explained at length how wrong you are about that. Do not try to pivot my explanation to apply to some other experiment. You don't understand how Jahn derived his statistical model, and I have presented and painstakingly supported a hypothesis that argues it was a beginner's mistake on your part that led you to your wrong conclusion.

That is what you were supposed to address. We're not moving on to Hasted or others until we've reached closure on PEAR and Palmer. Keep in mind you chose PEAR and primed us to expect an argument for it before you even started in the thread. You don't get to abandon it just because it's gone badly for you.

Quote:
You forgot to mention another reason – not all my opponents are engineers...
Irrelevant, as your errors regarding PEAR, Jeffers, and Palmer are not engineering matters. They are questions of statistical analysis and experiment design. Further -- insofar as it matters -- you have no idea what the engineering qualifications are of your opponents. You're simply casting aspersions based on an argument from silence.

Regarding PEAR and its critics, you claimed to be an expert statistician and upon that basis you claimed you could say that Jeffers and Palmer were biased and incompetent critics of Robert Jahn and PEAR, and that because of those factors their criticism should be dismissed and Jahn's findings should be upheld. I have presented facts that thoroughly undermine every aspect of that argument.

Quote:
I will abide their vote if you provide the information that I requested regarding the voting procedure.
Non sequitur. The vote regarding your credibility has nothing to do with your factual errors. I have made you aware of facts -- in some cases from your own sources -- that thoroughly dispute the basis by which you reject Palmer and Jeffers. If you are uninterested in redeeming your argument in light of them, then you simply lose the debate.

Further, you voted in the poll yourself. It's a little untimely for you now to question how it worked.

Quote:
The absence of this information shows beyond a shadow of a doubt that I am right – by and large the audience rejects your posts.
Really? An argument from silence coupled with a veiled, unevidenced insinuation of vote-tampering gets you affirmatively beyond the "shadow of a doubt"? I don't know if I'm allowed to express under moderation how much I'm laughing at that, but I am.

You have provided no evidence that the audience "rejects my posts." I, on the other hand, can point to several actual endorsements that approve and agree with what I've written. I am unconcerned with the vote, <snip> I am instead concerned with your unwillingness to face up to the facts that dispute your claim to be an expert statistician. Please address those without further delay or distraction.

Last edited by Loss Leader; 11th October 2018 at 07:18 PM. Reason: Moderated Thread
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 11th October 2018, 05:45 PM   #940
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Thor 2 View Post
This is pretty basic stuff.
It is literally the first day of Mechanics of Materials.
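For anyone following along, the distinction fits in a few lines (illustrative numbers only):

```python
# Illustrative numbers only: stress is force per unit area (pascals); strain
# is the dimensionless fractional deformation that results.
force = 1_000.0           # newtons applied axially
area = 1e-4               # square metres of cross-section
length = 2.0              # metres of original length
delta_length = 1e-4       # metres of elongation

stress = force / area             # N/m^2: 10 MPa here
strain = delta_length / length    # dimensionless: 5e-05 here

print(f"{stress:.3e} Pa")
print(f"{strain:.1e}")
```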
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 12th October 2018, 02:42 AM   #941
aleCcowaN
imperfecto del subjuntivo
 
aleCcowaN's Avatar
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,432
I suggest everybody not embrace the example set in this thread from its very inception: Though I don't know the terminology in English (and the inevitable BE/AE differences), the strain is both/either the macroscopic deformation that comes from stress and/or its infinitesimal expression within the same state.

Though I know Wikiwad is not any of an authoritative source (as it's mostly used by cats to feign fictitious knowledge on the fly, in venues like web fora), I managed to find these departing from concepts well known to me in Spanish (and in one case, just by jumping from the Spanish Wikiwad version):

https://en.wikipedia.org/wiki/Stress...93strain_curve
https://en.wikipedia.org/wiki/Infini..._strain_theory
https://en.wikipedia.org/wiki/Strain_rate_tensor

Within this subject, strain seems to alternately match Spanish esfuerzo and deformación depending on the context. And certainly idiotic Hasted was able to observe twisted metal and theorize that some strain (as the perverts handling those pieces behind his back did to fool him) was present, which is what he and Palmer reported or commented on.

Forum user "Buddha" usually transcribes text well. That, I concede.
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul! These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics that are taking hold part of the forum and decided to do something about it.
aleCcowaN is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 12th October 2018, 03:43 AM   #942
P.J. Denyer
Illuminator
 
Join Date: Aug 2008
Posts: 4,984
Originally Posted by Buddha View Post
I will abide their vote if you provide the information that I requested regarding the voting procedure. The absence of this information shows beyond a shadow of a doubt that I am right – by and large the audience rejects your posts.
Your apparent confusion about the workings of a very simple, self explanatory, and public poll certainly does not make you correct; although it is an instructive demonstration of your process if we didn't have enough examples already. Your failure to understand a contrary position does not make the one you hold correct.

There's nothing to 'abide' by in the poll anyway; it is simply informative. But it does puncture your pretension to the support of a silent majority.
__________________
"I know my brain cannot tell me what to think." - Scorpion

"Nebulous means Nebulous" - Adam Hills
P.J. Denyer is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 12th October 2018, 07:28 AM   #943
Buddha
Thinker
 
Buddha's Avatar
 
Join Date: Jun 2018
Location: New York City
Posts: 249
“Hasted's conclusions regarding the "surface of action" are also
problematic, even if one accepts the loose definition of "synchronous." This
concept is based on observations of data from North and Williams that
synchronous signals are more prevalent when the specimens are in a
radial-vertical configuration with respect to the subject than in some other
configuration. However, no statistical analyses were offered to support the
significance of this trend. In the case of Williams' data (Hasted, 1977), 12 of 15 (80%) signals in the vertical or radial-horizontal-vertical
configurations were synchronous as compared to 22 of 39 (56%) with the other
configurations. This difference is associated with a corrected chi-square
value of 1.67, which with one degree of freedom is clearly nonsignificant.” Palmer, page 193
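Palmer's corrected chi-square can be checked directly; assuming the 80% figure corresponds to 12 of 15 signals, a Yates-corrected 2×2 test reproduces his value:

```python
# Yates-corrected chi-square for Palmer's 2x2 table, assuming the 80% figure
# means 12 of 15 synchronous signals (vs 22 of 39 in other configurations).
a, b = 12, 3     # vertical/radial-type configurations: synchronous / not
c, d = 22, 17    # other configurations: synchronous / not
n = a + b + c + d

chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d))
print(round(chi2, 2))  # 1.67, matching Palmer's reported value
```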

It seems to me Palmer contradicts himself – at first he says the synchronous signals are more prevalent when the "specimens are in a radial-vertical configuration with respect to the subject", then he says that the difference is statistically insignificant. As I noted before, in these experiments the particular position of a specimen is of no importance, so the difference is statistically insignificant, as expected. But even if the difference were statistically significant, this wouldn't mean much either, because it could be attributed to the piezoelectric effect, which depends on the direction of the stress applied to the specimen. But it appears that Palmer didn't consider this simple explanation of the signal difference.
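
Palmer's arithmetic can be checked directly. Below is a minimal sketch (pure standard-library Python, not anything from Palmer's paper) of the Yates-corrected chi-square for the 2x2 table implied by the quoted counts: 12 of 15 signals synchronous in the vertical/radial configurations versus 22 of 39 in the others.

```python
# Yates-corrected chi-square for a 2x2 contingency table.
# Counts are taken from the Palmer passage quoted above:
# 12 of 15 synchronous in one configuration group, 22 of 39 in the others.

def yates_chi_square(a, b, c, d):
    """Chi-square with Yates continuity correction for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: configuration group; columns: synchronous / non-synchronous.
chi2 = yates_chi_square(12, 3, 22, 17)
print(round(chi2, 2))  # 1.67, matching Palmer's reported value
```

With one degree of freedom, 1.67 falls well short of the 3.84 needed for significance at the 0.05 level, which is the sense in which Palmer calls the difference "clearly nonsignificant".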

Although the surface of action is presented as a basic physical
characteristic of the phenomenon, it could just as easily be a reflection of
a possible psychological preference of North and Williams; there is
certainly no basis for drawing conclusions about the generality of the
surface of action”. Palmer, page 193

Psychological preference of what? Palmer didn't elaborate, because his position is completely illogical – the metal bar is either under a stress caused by the subject or it is not; the subject's "psychological preference" is completely irrelevant and does not affect the result.
Buddha is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 12th October 2018, 11:02 AM   #944
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by JayUtah View Post
Let's transform the rest of this discussion into a quiz. (I will ask the indulgence of the other contributors to give Buddha a fair chance to research and respond to these before jumping in themselves.)
  1. Is the Jahn REG experiment an example of in-group experiments or between-group experiments?
  2. Randomization is a technique that applies to which of the two experiment classes in the previous question?
  3. What does Dr. R-S say is one possible consequence of applying parametric analysis techniques to data sets with extreme outliers?
  4. What alternatives to randomization does Dr. R-S describe?
  5. On what type of data in Jahn's REG experiment would the chi-square statistic be appropriate?

These are not mere gotcha! questions. These are issues brought up in the video that relate to specific errors you have committed in this thread while attempting to discredit Dr. John Palmer and Dr. Stanley Jeffers.
Buddha is excusing his silence on his failed PEAR claims by saying his opponents haven't produced this or that specific bit of information. He says they haven't described how the poll was taken that disfavors him. Not true, but largely irrelevant, since at best that supports only an appeal to the gallery. The gallery has spoken, but that's not why we're here. He says they haven't substantiated from the literature the validity of Dr. John Palmer's exercise of omitting the data from Operator 010. Also not true, but there are only so many references I can make to Philip Zimbardo's The Lucifer Effect before the reasonable conclusion is that Buddha will never acknowledge or read it. Zimbardo goes on for several chapters recollecting the infamous Stanford prison experiment, and speaks at length about his outlying guard subject. But according to Buddha, none of his opponents has met his challenge to justify disregarding Operator 010.

The quiz I quoted above was to determine how much Buddha consulted his own sources when dealing with topics like outlying data. He's had time to answer the quiz, but chose to try to wrench the discussion away from PEAR and toward his change-of-subject distractions. So what does his source say?
  1. Is the Jahn REG experiment an example of in-group experiments or between-group experiments?
    It is an example of an in-group experiment design. As Dr. R-S explains, a between-group design is when the subject pool is divided into a control group and a variable group. A dependent variable is expected to vary between the groups according to the experimenter's manipulation of an independent variable. This is the common approach in drug trials, where the independent variable is whether one receives the drug or a placebo. In contrast, the in-group design treats all the subjects as one group and measures variance across some other variable. This is the design for exploratory studies which simply seek to measure some property of a group. It also works for longitudinal studies, where the independent variable is time. In Jahn's studies -- as well as Jeffers' -- the subjects were all told to attempt to vary the outcome of a random event. The independent variables were all concerned with whether they would be asked to exert an influence on some given run and toward what variance. The dependent variable was whether the mean number of successes of the group as a whole varied accordingly from no-effort to effort runs.

    The Stanford prison experiment, in contrast, is a between-group experiment. It collected a pool of subjects roughly distributed similarly to the population by the measures that Dr. Zimbardo felt would affect the emergence of the properties he hoped to measure. Then they were randomly divided into guard or prisoner groups, and the experiment was run. His subsequent analysis compared the psychometric data collected after the experiment.

  2. Randomization is a technique that applies to which of the two experiment classes in the previous question?
    Randomness applies only to between-group experiments. Buddha's source confirms what I explained earlier. In a between-group experiment, the least biased method of dividing the subjects into the control and variable groups is by random assignment, such as Zimbardo achieved via the coin-toss. The homogeneity of each group can be measured, and this drives the degrees of freedom in the subsequent parametric comparison of groups, such as via the t-test.

    Since Jahn's subjects were not divided into control and variable groups, there is no randomness constraint to violate if a subject is disregarded. It was an in-group experiment. Jeffers' single-slit experiment used a similar design. The subjects were not divided into control and variable groups. It was also an in-group experiment. Rather, the control-vs-variable distinction was whether effort was directed to be applied in some given run, as it was in Jahn's experiment. In Jeffers' double-slit experiment there were two groups of subjects, but it was not a between-groups experiment. However, a measure of between-group analysis was possible, with the independent variable being the means-directed versus outcome-directed categorical variable. Since no significant variance was observed in either group, it would have been moot to try to determine whether any variance could have been attributed to that variable.

  3. What does Dr. R-S say is one possible consequence of applying parametric analysis techniques to data sets with extreme outliers?
    She says it will bias any determination of variance. This is exactly what Dr. Palmer observed in the Jahn experiment, so Buddha's source expressly confirms Dr. Palmer's analysis. Dr. R-S says that parametric analysis (the kind used by both Jahn and Jeffers) engenders certain assumptions that are violated by outlying data. One of those assumptions is that the data are normally distributed. Extreme outliers violate the normal-distribution assumption, and Operator 010 -- who singly accounted for all the significance in the entire experiment, and whose performance exceeded that of all other subjects combined -- is clearly an extreme outlier by Dr. R-S's definition.

    Buddha does not know what data in the Jahn experiment were actually looked at. As stated previously at length, he wrongly thinks the Poisson-governed behavior among small numbers of electrons is what Jahn applied the t-test to. Instead it was the mean number of 1-bits observed over a 200-sample run of his REG, aggregated over several runs taken over many months by a variety of subjects. When these data are observed on a per-subject basis instead of aggregated all together, they must be normally distributed for any of Jahn's tests to be of value. That is, the number of subjects that achieve a certain number of mean 1-bit observations in the no-effort case should be normally distributed around μ=100. The number of subjects that achieve certain PK+ or PK- scores should be normally distributed around a score that is either higher or lower than 100, such as μ=101 for the PK+ case and μ=99 for the PK- case. Instead, Palmer noted that the disaggregated data reported by Jahn showed a normal distribution around μ=100 in the effort case, with Operator 010 a clear outlier in both the PK+ and PK- cases.

    According to Buddha's source, this violates the expectation of normal distribution. Parametric analysis must remove the outlier before it can achieve any explanatory power.

    Buddha tried to get around this by saying the expectation of a normal distribution is a straw man, that no such expectation arose. This is, in fact, the standard rehabilitation of PEAR among proponents of psychokinesis. However, it was Jahn's choice to apply the t-test. That carries with it an expectation of normal distribution. If he did not expect the subjects' performance in the effort case to be normally distributed, he would not have selected the t-test. Again, because Buddha does not know what data were actually used in the t-test, he does not see the problem with his speculative dismissal of Palmer.

    Also included in Dr. R-S's explanation of the assumptions that must hold before parametric analysis is appropriate is the basis of Jeffers' baseline-bind criticism, which Buddha has assiduously let pass in silence. The anomalies in the baseline (i.e., no-effort) runs violate the distribution assumptions just as certainly as outliers, except for exactly the opposite reason. The no-effort results are not as varied as they should be, and Jahn himself notes this. Jeffers, in Skeptical Inquirer, merely notes the proper interpretation that should attach to such a confession.

    Since Buddha was unable to answer this question, I submit the hypothesis that he has doggedly avoided Jeffers' explanation of the baseline consequences because he cannot discuss it knowledgeably and is consequently not an expert in statistics. This undermines his opinion of the impartiality and competence of Palmer's and Jeffers' individual criticism, which was offered as that of an expert. Notably, his own source provides the answer he lacked.
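
The outlier point is easy to make concrete. The sketch below uses fabricated, deterministic run data (not PEAR's actual numbers): nine null subjects whose per-run 1-bit counts average exactly 100, plus one hypothetical prolific subject centered at 105, standing in for Operator 010. Pooling everything produces a one-sample t statistic above the conventional 1.96 significance threshold; dropping the outlier subject leaves nothing at all.

```python
import math

def t_statistic(runs, mu=100.0):
    """One-sample t statistic for the mean 1-bit count per 200-sample run."""
    n = len(runs)
    mean = sum(runs) / n
    var = sum((x - mean) ** 2 for x in runs) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

# Nine null subjects: runs symmetric about 100 with spread ~7, roughly the
# sd expected for the count of 1-bits in 200 fair samples.
null_runs = [93, 107] * 50 * 9            # 900 runs, mean exactly 100

# One hypothetical outlier subject, shifted to a mean of 105.
outlier_runs = [98, 112] * 50             # 100 runs, mean exactly 105

pooled = null_runs + outlier_runs

print(round(t_statistic(pooled), 2))      # 2.21: looks "significant"
print(t_statistic(null_runs))             # 0.0: no effect without the outlier
```

The aggregate result is driven entirely by the one subject, which is exactly the situation Palmer flagged: the pooled mean hides a distribution that is not remotely normal around its center.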


  4. What alternatives to randomization does Dr. R-S describe?
    Explicit homogenization, if the variables that may confound are well-known and well-behaved. This speaks partially to Buddha's attempted counterexample, wherein he was told that he could not remove a miraculously non-cancerous subject from the group to determine what extracurricular factor had saved him. That, he argued, would violate randomization, and therefore couldn't be allowed in Palmer's treatment of Jahn. But randomization doesn't apply to Jahn's in-group experiment, and what Buddha proposed was to remove a subject based on a presumed independent variable. He noted the outlying case as an uncommon variance in the dependent variable, but proposed that it be eliminated because some independent variable was the cause. Preferentially removing a subject according to presumptions about independent variables expressly introduces bias, but that was not the grounds on which Palmer proposed to disregard Operator 010. Nor is it the grounds on which subjects are routinely excused from ongoing research, as AleCcowaN's reference established. Such invariant reasons include non-adherence to protocol, voluntary withdrawal (a legal requirement of human-subjects research conducted in the United States), and the death of the subject. These do not violate randomness because they are factors assumed to affect both groups equally and randomly, and are not tied to any controllable independent variable. Zimbardo's proposal to exclude his rogue guard was on the grounds of violating the experiment protocol, which, as we see in the literature, is a perfectly legitimate reason having nothing to do with randomization.

    Dr. R-S notes that it is uncommon to attempt such homogenization, but it is possible and, where successful, suitably rigorous.

  5. On what type of data in Jahn's REG experiment would the chi-square statistic be appropriate?
    Several, for example the direction of effort (i.e., PK+ versus PK-) as a categorical variable, and the problematic volitional variable -- whether the subject got to choose whether to exert an effort, and in what direction. The chi-square test is non-parametric, meaning it does not require its data to be normally distributed. The PK+ and PK- data can be considered as two kinds of data. Certainly there is the aggregation of means, which is the parametric aspect of it. But there is also a categorical variable in simply whether an effort was made -- "PK-some-direction versus no-effort." And another, as stated, in the direction of effort -- "PK+ versus PK-."

    Buddha had great difficulty with the concept of categorical variables, which is puzzling coming from someone who claims expertise in statistics and who decries Palmer's correct handling of them as incompetent. Buddha wrongly thought that the encoding for a categorical variable had to be treated parametrically in some way as continuous or ordinal data, and that it had to be distributed in a certain way in order for Palmer's analysis to hold. In other words, Buddha was trying to shoehorn non-parametric analysis into the rules for parametric analysis. This is simply not something someone would do who has studied inferential statistics at even the most elementary level. It's a rookie mistake.

    Dr. R-S mentions Buddha's error as a special case of the ordinal variable and gives an example of one from her field (education) whose ordinal encoding is contrived such that it can be treated as a continuous variable. She is careful to mention that this is not licit except in these rare and contrived cases.

    Non-parametric analysis simply means the data are not expected to fall into the distributions suggested by our parameterized mathematical constructs for distribution -- Gaussian, Poisson, binomial, etc. They may exist on some sort of continuum (e.g., for ordinal values). Or they may simply be categories that imply no magnitude or order, such as sex or whether one was supposed to push the REG to a higher or lower number of 1-bits.

    The chi-square test determines how independent two variables are, whether they vary together or vary separately. Variance in a categorical variable is simply whether one data point is in a different category than another. The in-group analysis noted that volition was not independent from the effect. That is, variance in 1-bit means under effort was observed only when the subjects got to choose how they attempted to affect the machine. Volition is a categorical variable; it is not distributed across an ordered sequence of potential outcomes or a quantitative support. It is distributed merely into yes-or-no bins. Trying to shoehorn it into parametric analysis would be a cargo-cult mistake.

It looks like Buddha's own sources do a pretty good job of refuting his claims. It's not like we have to cite many, if any, sources of our own.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 12th October 2018, 04:29 PM   #945
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by Buddha View Post
Hasted's...
No, Buddha, we're not finished with PEAR.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th October 2018, 09:04 AM   #946
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 84,535
Mod Warning: After a quick review I'm going to say that discussing the Poll thread in Community is off-topic for this thread; it has nothing to do with the actual science topics this thread is meant to be discussing. So drop the discussion about that thread.
Posted By:Darat
__________________
I wish I knew how to quit you
Darat is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 19th October 2018, 08:13 AM   #947
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 9,140
Is there any evidence supporting the existence of telekinetic powers of any kind?

So far this thread has discussed, at great length, the PEAR study, which was so bad that even its original author collaborated on a subsequent study that eviscerated it. The "defenses" of PEAR offered in these pages have been a profound and complete joke.

There was a metal-bending test, but it was so poorly designed that it would have been trivial to doctor the results or cheat the actual tests.

Is there ANYTHING that stands up to scrutiny?
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!
halleyscomet is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 20th October 2018, 04:55 AM   #948
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 84,535
You always have to take it back to the original claim. People believed telekinesis was real because they saw people apparently moving or manipulating macro objects with their minds, moving balls on a table, making metal bend and so on. The type of experiment PEAR did was not looking at what was being claimed; it was looking for an effect they could shoehorn into the word "telekinesis" - there was no phenomenon that people believed in, or for which there was anecdotal evidence, that PEAR was set up to find.

It really was a matter of hoping to find something no matter what that meant something was happening that couldn't be explained by "regular" science.

Even if they had found something that couldn't be explained by current science there would have been no reason to believe it was even linked to the telekinesis that people believe/d existed.
__________________
I wish I knew how to quit you
Darat is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 20th October 2018, 05:29 AM   #949
aleCcowaN
imperfecto del subjuntivo
 
aleCcowaN's Avatar
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,432
Originally Posted by Darat View Post
You always have to take it back to the original claim. People believed telekinesis was real because they saw people apparently moving or manipulating macro objects with their minds, moving balls on a table, making metal bend and so on.

Don't forget to add levitation to that, as moving other bodies, including one's own, is part of this family of bull. And it's rooted way back in time, in those oriental beliefs that "pure/advanced souls" can float.

We "might" find that to be the motivation behind this thread (and profitable pseudo-scientific books can be written about it). Excerpts from Google:

Quote:
Reiki, Yoga, Meditation and Yagyas:New Age Practices: Techniques for ...

https://books.google.com.ar/books?isbn=1413483879

Marc Edwards - 2005 - ‎Body, Mind & Spirit
Jesus, Buddha, Krishna, and other advanced souls were on extremely high ... however, levitation will happen with significantly large number of meditators in the ...


Attaining the Siddhis: A Guide to the 25 Yogic Superpowers

https://www.consciouslifestylemag.com/siddhis-attain-yoga-powers/
25 Superhuman Powers You Can Gain Through Practicing Yoga and Meditation ... “In Buddhism, these are not miracles in the sense of being supernatural events, any ... The more advanced siddhis are said to include invisibility, levitation,


Is levitation an illusion or is it real? If is it real, can it be ...

https://www.quora.com/Is-levitation-an-illusion-or-is-it-real-If-is-it-real-can-it-be-reall...
Oct 8, 2014 - Magnetic levitation is the most commonly seen and used form of levitation. ... In reality its can be achievable, in ancient Hindu and Buddhism scripture but for ... there are so many process for your soul purification and connect to GOD. ... Levitation, it is said is a side effect of advanced pranayama (yoga breathing exercises).
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul!These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics that are taking hold part of the forum and decided to do something about it.
aleCcowaN is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 20th October 2018, 10:29 AM   #950
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,426
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced? Or the pattern of light in a double-slit diffraction experiment?
__________________
Steen

--
Jack of all trades - master of none!
steenkh is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 20th October 2018, 04:09 PM   #951
aleCcowaN
imperfecto del subjuntivo
 
aleCcowaN's Avatar
 
Join Date: Jul 2009
Location: stranded at Buenos Aires, a city that, like NYC or Paris, has so little to offer...
Posts: 9,432
Originally Posted by steenkh View Post
How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced.
It can be argued that the brain itself cannot fathom those quantum phenomena; it cannot even fathom its own internal workings, yet it works!

The matter behind all this wishful thinking about telekinesis and other niceties is things like "the power of will", "the power of holiness" or "the power of perfection". The fallacious step is thinking that, just as will is certainly generated by the black box of our brain, it must work the other way around too: that will alone can find a quick way to influence every kind of natural law that makes other black boxes do their things.

In the end it is not that far from thinking that last-minute repentance will produce an eternity in the heaven of the blessed. The only problem is the elusive evidence.
__________________
Horrible dipsomaniacs and other addicts, be gone and get treated, or covfefe your soul!These fora are full of scientists and specialists. Most of them turn back to pumpkins the second they log out.
I got tired of the actual schizophrenics that are taking hold part of the forum and decided to do something about it.
aleCcowaN is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 22nd October 2018, 06:22 AM   #952
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 9,140
Originally Posted by steenkh View Post
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose that it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced. Or the pattern of light in a double slit diffraction experiment.
You're confusing the quest for actual evidence with the crafting of pseudo-evidence. The results from random number generators and double-slit experiments are easily manipulated by dodgy statistics and compromised baselines. This very thread is full of examples. To fake spoon bending, however, you need to do something as obvious as having the spoons bend while not being observed.

It's easier to hide deceit in obscure phenomena. The legitimacy of a spoon-bending test can be easily challenged by repeating the test with more than one impartial observer watching the spoons when they're supposed to bend. The PEAR study however can be "defended" by spinning tall tales about statistics that are good enough to fool the credulous, so long as they have a layman's understanding of statistics.
__________________
Look what I found! There's this whole web site full of skeptics that spun off from the James Randy Education Foundation!
halleyscomet is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 22nd October 2018, 09:14 AM   #953
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by steenkh View Post
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose that it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator...
The first two pages of the Jeffers and Sloan paper (1992, the single-slit experiment) survey that question.

Keep in mind that quantum mechanics says that particles don't really exist in the traditional way, but exist simultaneously in a number of states (i.e., position, velocity, etc.) until they are observed. The observation per se causes them to appear in a fixed state, and the state in which they are observed is governed by a probability distributed among all possible states until the "collapse." Stanley Jeffers points out that a few early physicists entertained the hypothesis that observation was a conscious act, and speculated whether the state of consciousness in the observer had some effect on the way the wave function (i.e., the mathematical description of all those simultaneous states) collapsed.

This is a smidgen disingenuous, because none of those theories really caught on, were validated empirically, or persisted much past the 1950s. Jeffers presents the single-slit experiment as such an attempt at empirical proof -- which, of course, failed. Less adventurous physics maintains that quantum observation requires no special property in the observer and exerts no variable effect on the wave function. And Jeffers' results are consistent with this.

To answer what I think is your real question, I gather that while the purported effect may be conscious, it may not be cognitive. That is, you don't need to know the intimate realities of fluid dynamics in order to breathe, to affect your breathing, or to cause your breath to have effects on the outside world. Instructions such as "Shift the diffraction pattern to the left," or "Make more ones than zeros come out of the machine," weren't intended to require the operator to know how the apparatus worked at any scope of examination. It's closer, I think, to flying by thinking happy thoughts. Your consciousness is presumed simply to preferentially collapse wave functions without a lot of detailed planning.

Was that the question you were wondering about?

Originally Posted by aleCcowaN View Post
Don't forget to add levitation to that, as moving other bodies, including one's own, is part of this family of bull. And it's rooted way back in time, in those oriental beliefs that "pure/advanced souls" can float...
Yes, let's not forget that this thread seems to be one in a loosely-related series attempting to provide scientifically addressable proof for tenets of Buddhism, or some similar belief system that incorporates elements of Buddhism. And in Buddhism macro-level psychokinesis is a thing. In many of the dharmic religions, degrees of enlightenment are associated with supernatural mind-over-matter ability. Of course anyone familiar with stage magic knows how the swami really levitates, and how the spoons really bend. But there is a movement in all religions, I think, that wants to argue that the supernatural claims have some secular justification or validity.

I agree with Darat:

Originally Posted by Darat View Post
The type of experiment PEAR did ... was looking for an effect they could shoehorn into the word "telekinesis" -...

It really was a matter of hoping to find something no matter what that meant something was happening that couldn't be explained by "regular" science.
It's all about getting a foot in the door. If you can show that a quantum-level PK effect exists, then skeptics are wrong in principle -- an important rhetorical victory -- and the rest is just a matter of scale or degree. It could then be said that ordinary people can manipulate matter by forcing wave-function collapses to be non-stochastic on the order of a few particles, but then more enlightened folk could do that on a grander scale because their consciousness just had that much more horsepower.

But of course those claimed macro-psychokinetic effects have never been demonstrated under rigorous empirical control, and those who profess macro-scale ability eschew the rigor and complain about it. This leads the critical thinker to conclude that the macro effects are more likely to be the obvious sorts of stage magic which the actors know would be revealed by the proposed controls, and which the observers have seen revealed to them by their magician friends. The world is right to be skeptical of claims to supernatural ability that work only when conditions are just right.
JayUtah is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 23rd October 2018, 02:40 PM   #954
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,426
Thanks. I think you all answered my question very well, and from different angles.
__________________
Steen

--
Jack of all trades - master of none!
steenkh is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 25th October 2018, 03:54 PM   #955
JayUtah
Penultimate Amazing
 
JayUtah's Avatar
 
Join Date: Sep 2011
Posts: 16,393
Originally Posted by halleyscomet View Post
You're confusing the quest for actual evidence with the crafting of pseudo-evidence. The results from random number generators and double-slit experiments are easily manipulated by dodgy statistics and compromised baselines. This very thread is full of examples of this. To fake spoon bending however you need to do something as obvious as having the spoons bend while not being observed.

It's easier to hide deceit in obscure phenomena. The legitimacy of a spoon-bending test can be easily challenged by repeating the test with more than one impartial observer watching the spoons when they're supposed to bend. The PEAR study however can be "defended" by spinning tall tales about statistics that are good enough to fool the credulous, so long as they have a layman's understanding of statistics.
It's important to realize that dodgy statistics come in various flavors. Broadly speaking, you can misuse statistics by applying them where they don't belong. Or you can misuse statistics to hide illicit manipulation of, or unfortunate accidents in, the data. The distinction matters because some kinds of science necessarily rely on statistical methods to arrive at their findings. You don't want to say that such-and-such a study was invalid merely because it developed its conclusion statistically rather than by direct observation. In the hypothetico-deductive method, statistical reasoning is a perfectly defensible form of deduction. It has a well-defined and well-behaved calculus.

Jabba's proof for immortality is a good example of applying statistics where they don't belong. As the statisticians he consulted told him, his primary error was that he didn't have any actual data. Statisticians like data. To correct that problem, he simply made up all his data. He pulled numbers out of his kiester and applied poorly-understood statistical inference to them to reach an assurance of success. It's akin to a magician standing on the front of the stage and, with a chalkboard, showing -- with reasonable assumptions -- a 98.36% probability that he really did saw the lady in half. Just show us the trick, ya two-bit Houdini!

Where PEAR is concerned, there's one level of criticism that decries looking for effects so tiny that they're fundamentally indistinguishable from noise. (It's psychokinesis, Jim, but not as we know it.) Let's grant that the effect they're looking for is hypothesized to be very small (however boring that is). Then they must use statistical methods to detect it. The next level of criticism is whether they did so correctly. That's the second category I mentioned -- the case where statistics is the right thing to use, but it's used incorrectly or deceptively.

The impropriety in PEAR's case starts with aggregation of poorly-distributed data. This is the extreme outlier problem. Operator 010 clearly does not fit the expected distribution and, in fact, accounts for all the variance noted between the effect and no-effect runs. Hiding a prodigious data point like that behind a skewed mean is dodgy statistics. The impropriety continues with the suspiciously correlated baselines. It doesn't really matter how they got that way. The problem is that the statistical comparison offered by the t-test shows actual significance only if the baselines are credibly distributed, which wasn't the case for PEAR.
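To make the outlier problem concrete, here's a minimal Python sketch with made-up numbers (the operator labels and hit counts are hypothetical illustrations, not PEAR's actual figures): nine operators perform at pure chance, while one prolific outlier contributes far more trials at a slightly elevated hit rate. Pooling everything produces a grand z score that looks wildly significant, yet the entire "effect" lives in the one operator.

```python
import math

# Hypothetical (hits, trials) per operator -- illustrative numbers only.
# Nine operators at pure chance; one prolific outlier at 52% hits.
data = {f"op{i:02d}": (500, 1000) for i in range(1, 10)}
data["op10"] = (10400, 20000)  # the outlier, cf. PEAR's "Operator 010"

def z_score(hits, trials, p0=0.5):
    """z statistic for a binomial proportion tested against the null p0."""
    return (hits / trials - p0) / math.sqrt(p0 * (1 - p0) / trials)

# Pooled analysis: aggregate every trial and test the grand proportion.
hits = sum(h for h, _ in data.values())
trials = sum(t for _, t in data.values())
print(f"pooled: z = {z_score(hits, trials):.2f}")  # looks hugely significant

# The same test with the outlier excluded: the "effect" vanishes entirely.
hits_rest = hits - data["op10"][0]
trials_rest = trials - data["op10"][1]
print(f"without op10: z = {z_score(hits_rest, trials_rest):.2f}")
```

The aggregate number alone would convince most readers; only the per-operator breakdown reveals that one data source is doing all the work.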

Yes, that sort of thing is much easier to hide from the average observer than whether you actually sawed the lady in half, bent the spoon, or are hovering above the sidewalk. The concept of an outlier is intuitive, but the reader has to know one was there. When the data are presented only in aggregate form, the reader is likely to assume they are appropriately distributed. The concept of a baseline bind is instead far more esoteric. It requires adept knowledge of how parametric distributions behave at a deeply conceptual level. It is manifest only in dimensionless numbers that acquire significance only when read with years of experience.

But there is something about the baselines even Stanley Jeffers didn't write about because it happened after he finished his involvement with PEAR and psi research. The baselines in the last data sets added to the PEAR database were as expected. That is, the problem with the baselines magically corrected itself after the scholarly community expressed concern over it. That has two implications -- one fairly practical and the other a bit sinister. In practical terms, you can't aggregate baselines from the beginning of the project (the too-narrow ones) along with more properly distributed baselines from the end of the project. They're dissimilar data. In sinister terms, how about that timing?
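The "too-narrow baseline" signature can also be shown numerically. The sketch below uses fabricated run counts (not PEAR's data) for a hypothetical baseline of 200 binary trials per run: binomial theory says the hit counts should scatter around 100 with variance n·p·(1−p) = 50, so an observed-to-expected variance ratio far below 1 means the baselines are hugging the mean more tightly than chance allows.

```python
import statistics

# Hypothetical baseline runs: hit counts out of n = 200 binary trials each.
# Binomial theory: counts should vary around 100 with variance n*p*(1-p) = 50.
n, p = 200, 0.5
expected_var = n * p * (1 - p)

healthy = [94, 103, 110, 97, 89, 106, 101, 92, 108, 99]        # spread matches theory
too_narrow = [99, 100, 101, 100, 99, 101, 100, 100, 99, 101]   # hugging the mean

for label, runs in [("healthy", healthy), ("too narrow", too_narrow)]:
    ratio = statistics.variance(runs) / expected_var
    print(f"{label}: observed/expected variance ratio = {ratio:.2f}")
```

A ratio near 1 is what an honest chance process produces; a ratio near 0 is the dimensionless red flag that takes experience to notice, and it is exactly why early and late PEAR baselines can't legitimately be pooled.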

Statistical reasoning is problematic even when it's not intentionally dodgy. PK effects barely poking their heads above the ether are unconvincing not only because they don't relate to what has commonly been peddled as psychokinesis, but also because the smaller the observed effect, the more exacting and uncompromising the experiment design must be for the attribution of that effect to a purported cause to be credible. The fact that only the first round of the PEAR protocol produced any significant result, and that nothing thereafter -- even studies that followed the same protocol -- found anything at all, is best explained as a confound particular to that time and place.
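The "smaller effect, more exacting design" point can be quantified with the standard sample-size approximation for a binomial proportion. The sketch below is a rough back-of-envelope calculation, not any lab's actual protocol: it uses the conventional normal-approximation formula with z values for a two-sided 5% significance level and 80% power, and shows that shrinking the claimed effect by a factor of ten multiplies the required trial count by a hundred.

```python
import math

# Rough sample-size sketch (normal approximation, hypothetical targets):
# trials needed to detect a shift of `delta` away from chance (p = 0.5)
# with ~80% power at a two-sided 5% significance level.
def trials_needed(delta, p=0.5, z_alpha=1.96, z_beta=0.8416):
    return math.ceil((z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2)

for d in (0.05, 0.01, 0.001):
    print(f"shift of {d}: ~{trials_needed(d):,} trials")
```

At a claimed shift of one part in a thousand, you're looking at roughly two million trials just to have decent odds of seeing the effect at all -- and over a run that long, any tiny systematic bias in the apparatus will register as "significant" long before a genuine anomaly does.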
JayUtah is offline
Old 25th October 2018, 04:02 PM   #956
Spektator
Watching . . . always watching.
 
Spektator's Avatar
 
Join Date: Jun 2002
Location: Southeastern USA
Posts: 1,642
Originally Posted by JayUtah View Post
(Snip)

It's all about getting a foot in the door. If you can show that a quantum-level PK effect exists, then skeptics are wrong in principle -- an important rhetorical victory -- and the rest is just a matter of scale or degree. (Snip)
In other words, the proponents are saying, "If I have my foot in the door, you have to agree I can float a foot off the floor." Not that they will demonstrate that, but skeptics would have to agree that the supposed minuscule effect proves the macroscopic one. That's fallacious too, of course.
Spektator is offline
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.