Originally Posted by
Robert Prey
A "dismissal" without proof via replication is an opinion based on supposition -- empty hot air. Non-Science. This is supposed to be forum of thinkers who base opinions on rationality -- the Scientific Method, of which testing must follow any theory and replication must follow any test.
I seem to recall in my scientific training that the professors kept mentioning, "You are required to subscribe and defer to any layman's notion of what reasonable replication entails, and you must acquiesce to any of their uninformed demands, no matter what I teach you here."
Oh right, that never happened. Every conspiracy theorist thinks he's a scientist and every conspiracy theorist tries to beat his critics over the head with "The Scientific Method," which is invariably a hodge-podge of layman's misconceptions aimed at trying to shift the burden of proof and arbitrarily raise the bar for his critics. You have no idea what the scientific method is or how it applies to this question.
Further, you've steadfastly resisted each and every request that you replicate any of your claims or perform any sort of demonstration to show that your expectations have any basis in reality. More so than anyone, you are simply talking and not doing.
I'll be very surprised if Robert has any sort of meaningful response to what I'm about to say. But I write it anyway because other people clearly learn from what I talk about, even if it's lost on conspiracy theorists.
"Replication" is not the
sine qua non of the scientific method.
Reproducibility is. The difference between replication and reproducibility is night-and-day.
Replication is simply obtaining the same outcome multiple times, thus reducing the likelihood that the outcome is due to random chance or uncontrolled variables. Reproducibility is the property of an experiment in which all the variables that affect the outcome are exposed and controlled for. The latter ensures that the description of the hypothesis, model, experiment, and outcome is reasonably complete and correct, the test being whether a competent researcher, following only the description provided by the scientist, could construct an equivalent experiment. Whether he goes on to do so is irrelevant; the property of reproducibility is conceptual. Replication is often considered a suitable test of reproducibility, but only upon failure; a false-positive replication of an irreproducible experiment does not validate the experiment.
Now this type of reproducibility applies only to prospective empiricism, not retrospective. What's the difference? Prospective empiricism is where you contemplate ongoing or future behavior. You construct a hypothetical model of the behavior. Within that model will be several intermediate consequential variables that you can examine directly. The model predicts the behavior of those variables. If your model is correct, the variables should behave one way. If the null hypothesis is instead correct, the variables will behave a known, different way. You arrange to observe those variables and further arrange for the model to be exercised over a suitable range of inputs. Your model also includes variables you can't fully observe or control. You determine the likely effect those variables have upon the outcome and adjust your expectations accordingly. In other words, you design ahead of time a way to investigate how the universe functions.
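That prospective setup can be sketched in a few lines of Python. Everything here is invented for illustration -- the rates, the counts, and the rejection threshold are not drawn from any real experiment -- but it shows the essential discipline: you fix the decision rule before collecting data, then compare the observed counts against what the null hypothesis predicts.

```python
import math

def binom_pvalue_upper(n, k, p0):
    """P(X >= k) for X ~ Binomial(n, p0): the probability of seeing a
    result at least this extreme if the null hypothesis is true."""
    return sum(math.comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

# Null hypothesis: the intermediate variable fires at chance rate p0 = 0.5.
# Model: it should fire more often than chance.
# The rejection threshold is chosen BEFORE looking at any data.
alpha = 0.05
n, k = 100, 62          # illustrative: 62 "hits" observed in 100 trials
p = binom_pvalue_upper(n, k, 0.5)
print(f"p-value = {p:.4f}; reject null: {p < alpha}")
```

The point is not the arithmetic but the ordering: model, prediction, threshold, and observation plan all come first; the data arrive last.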
Retrospective empiricism, on the other hand, simply tries to answer the question, "What happened?" This is what historians have to do, and it's what I have to do in a forensic engineer's capacity to perform my professional duties. This type of study deals in past happenstance events that are not inherently reproducible, by their very nature as past, singular events.
The event in question is the production of the Obama PDF. It employed a series of steps, most of them unknown but many of them susceptible to educated guess. Each of those steps embodies variables whose values in each case affect the outcome. The statistical stability of those variables from trial to trial affects the statistical shape of the set of possible outcomes. Unless we can clairvoyantly and omnipotently control for all those variables, the best we can hope for in a re-enactment of the incident is some degree of statistical similarity.
Happenstance events chronically evade replication because unlike experimental events we can plan, observe, and control in great detail ahead of time, happenstance events are generally only observable by means of their already-presented results. We have no means to retrospect or introspect. Further, happenstance events that interest us may also be outliers, whereas ordinary examples of the same type of event pass unnoticed. Outliers in prospective empiricism are generally discarded.
Trying to replicate an automobile accident, for example, to the point of producing identical skid marks and identical dent patterns in the cars, is a ludicrously elusive goal. Yet this is what the Birthers expect. They provide no evidence that the pattern of objects they see in the Obama PDF is due to any sort of forgery -- they blatantly admit that forgery is a default conclusion for them, i.e., they elaborately and admittedly beg the question. Yet they expect that someone sitting at a scanner with a paper document should be able to produce a bitwise-identical copy of the Obama PDF after only a few trials.
We use retrospective empiricism all the time in forensic engineering. But we do not reasonably expect experiment results that duplicate the happenstance outcome right down to the limit of our ability to observe. That's silly. We expect congruent, comparable results. We expect results that match the quantitative statistics of the model. Determining the most likely cause of a happenstance event involves the scientific method and reproducible experiments. But it does not require replicating the previous outcome down to the finest detail.
Quote:
Your opinions, like Ricardo's, are based entirely on supposition but at least Ricardo admits he can't really tell if the BC document is real of fake. Can you?
If you phrase the question like that -- ambiguity with an equal burden of proof on either assertion -- then that's how we know you don't really know anything about science.
Please define "null hypothesis" for the group, Robert, and explain how it applies here and why. It's not like you haven't been asked that question before. In fact, you're asked for it in
every thread where you try to shift the burden of proof. And you have yet to answer it.
Since you've taken it upon yourself to lecture everyone on what the scientific method is and how we're all doing it wrong, you should be able to give us a little background on this in your own words and explain why your claims fit it so much better.
Originally Posted by
Robert Prey
Your explanations as to combinatorial complexity and the mathematics therein are very impressive. Very impressive, indeed. But supposition is not replication. And thus, your explanations of combinatorial complexity are just so much empty hot air.
You wish.
You haven't lifted a single finger to provide examples or demonstrations of any of the behavior you say PDFs should properly exhibit. Either you don't know how, or you've done it, found that the results dispute your beliefs, and declined to report them.
You're the one supposing that 1,200 trials is statistically significant. You know, Robert, there's a whole branch of science called sample statistics that deals with how much a sample can be said to represent the population from which it was drawn. You can't tell us whether 1,200 trials represent 50% or 0.0001% of the total number of outcomes, and therefore how likely the lowly sheriff's team (highly motivated, no doubt, to vindicate Barack Obama against unfair accusations) was to have stumbled onto the right combination.
In order to know how faithfully a sample represents the population, you need to know the size of the population. Therefore in order to know whether 1,200 samples is enough to have replicated one desired outcome, you need to know how many possible outcomes there are. I asked you to tell me how many there were, and you ignored the question. I asked you to tell me how to determine how many there were, and you ignored the question.
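The dependence on population size is easy to make concrete. The sketch below assumes, purely for illustration, that each trial is an independent, uniform draw from a space of equally likely outcomes -- a simplification the real problem doesn't enjoy, and of course the population sizes plugged in are hypothetical. The whole point is that Robert cannot supply the one number this calculation requires.

```python
def hit_probability(population, trials):
    """Chance that `trials` independent uniform draws from a space of
    `population` equally likely outcomes include one specific outcome."""
    return 1.0 - (1.0 - 1.0 / population) ** trials

# Hypothetical outcome-space sizes; nobody has established the real one.
for population in (10_000, 1_000_000, 10**9):
    print(f"{population:>13,} outcomes: "
          f"P(hit in 1,200 trials) = {hit_probability(population, 1200):.6f}")
```

With 10,000 possible outcomes, 1,200 trials have roughly an 11% chance of landing on any one specific outcome; at a billion, the chance is about one in a million. Whether 1,200 trials means anything at all depends entirely on a number the Birthers have never produced.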
I routinely deal in engineering problems that have thousands of initial-condition starting variables and literally billions of variables representing unknowns. I know what I'm talking about. You don't.
One also needs to apply the variance in the sample. In terms of discretes or categoricals, this is called the combinatorial complexity of the problem. In terms of continuous variables, ordinary statistical analysis of variance applies. If the samples are highly variable, we call the problem combinatorially unstable and we need more samples to satisfy the margin of error.
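As a toy illustration of how fast the discrete side of the problem grows: the outcome space is the product of the option counts. Every category and count below is invented -- these are not the actual variables of the Obama PDF's production, which is precisely the problem.

```python
from math import prod

# Hypothetical discrete settings in a scan-to-PDF pipeline.
# All names and counts are invented for illustration only.
settings = {
    "scan_resolution":   6,   # e.g., a handful of dpi presets
    "color_mode":        3,
    "compression":       4,
    "mrc_segmentation":  8,
    "software_version": 10,
    "post_processing":   5,
}

outcomes = prod(settings.values())
print(outcomes)  # 28,800 combinations from just six discrete variables
```

Six made-up variables already yield 28,800 combinations, before a single continuous variable enters the picture -- and the continuous variables are where the instability lives.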
I performed experiments, reported here, to test the stability of this problem. As I feared, the problem is highly unstable. Just minor variations in only one of the continuous variables produced a +/- 10 percent variation in the quantitative outcome, and substantial qualitative difference.
Where are your experiments, Robert?
Hence the notion that 1,200 trials performed by biased experimenters in any way falsifies the rebuttals of actual experts -- who point out the Birthers' extreme ignorance and nonsense -- is simply wishful thinking. None of the Birthers has any scientific training, so their fans just buy into the arbitrary number 1,200 as somehow impressively representative.
That doesn't even begin to cover the legal insignificance of this question, which is why none of Arpaio's grade-school science-project handwaving is admissible in court. You never answered that question either. You're right that this appeals only to the "court of public opinion" -- i.e., mere grandstanding for the media, with no prayer of actual credibility.