International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
Old Yesterday, 03:00 AM   #281
MetalPig
Illuminator
 
 
Join Date: Jan 2006
Location: 22, Acacia Avenue
Posts: 3,021
Originally Posted by Marasmusine View Post
Bogomolny acknowledges this and writes in his definition:
"Uniformly random selection is only possible from a finite set; so this is what we do: we always select from a finite set but do this repeatedly, each time selecting from a bigger set." (followed by the mathematical definition)

That's the same mistake I made.


I learned that "always select from a finite set but do this repeatedly, each time selecting from a bigger set" is not the same as "select from an infinite set".
__________________
Just drive.
Old Yesterday, 04:31 AM   #282
whoanellie
Critical Thinker
 
Join Date: Apr 2012
Posts: 326
post deleted

Last edited by whoanellie; Yesterday at 04:56 AM.
Old Yesterday, 09:01 AM   #283
Myriad
Hyperthetical
 
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,915
Consider the algorithm I mentioned before. Select 1 as the candidate; then with probability 1/2 change the candidate to 2, then with probability 1/3 change the candidate to 3, and so forth. At each step before the consideration of candidate n, the current candidate is one of the integers from 1 to n-1, with equal probability 1/(n-1). After the random 1/n chance of switching to candidate n, each of the integers from 1 to n will be the current candidate with probability 1/n.
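A minimal Python sketch of a truncated (finite) version of this procedure, cut off at a step n_max (the cutoff and the function names are my own, for illustration; the actual procedure is the limit n_max → ∞):

```python
import random

def candidate_after(n_max, rng):
    """Run the candidate-switching procedure through step n_max.

    Start with candidate 1; at each step n, switch the candidate to n
    with probability 1/n. After step n, each of the integers 1..n
    should be the current candidate with probability 1/n.
    """
    candidate = 1
    for n in range(2, n_max + 1):
        if rng.random() < 1.0 / n:
            candidate = n
    return candidate

rng = random.Random(42)
counts = [0] * 7  # counts[k] = times k was the final candidate, k in 1..6
for _ in range(60000):
    counts[candidate_after(6, rng)] += 1
# Each of 1..6 ends up the final candidate roughly 10000 times.
```

The sketch only checks that every finite prefix of the run is uniform; it says nothing about what the infinite procedure "converges" to, which is exactly the point at issue.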

There's a "more efficient" version that starts with candidate 1, then for all n={0, 1, 2, ...} gives a 50% chance of adding 2^n to the candidate. This has some advantages, such as doubling the number of possible candidates considered at each step, and thus taking only logarithmic time (which of course is still infinite); and only needing a 50-50 random generator instead of one that can correctly roll arbitrary infinitesimal chances. It amounts to randomly generating a binary integer one bit at a time from right to left.

You could also do the same with, say, right to left base 10 digits. Start with 1, then for all n = {0, 1, 2...} add an equally distributed random selection from {0, 10^n, 2*10^n, ... 9*10^n}. That only needs a slightly more capable RNG (a d10 instead of a coin flip), and is another 3.3 times faster.
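Both digit-at-a-time versions can be sketched with one truncated helper, parameterized by base (base 2 for the coin-flip version, base 10 for the d10 version); the helper name and cutoff are my own:

```python
import random

def digit_candidate(base, num_digits, rng):
    """Start at 1, then for n = 0..num_digits-1 add a uniformly chosen
    element of {0, base**n, 2*base**n, ..., (base-1)*base**n}.

    Equivalent to 1 + a uniform num_digits-digit base-`base` number,
    generated one digit at a time from least significant to most.
    """
    candidate = 1
    for n in range(num_digits):
        candidate += rng.randrange(base) * base ** n
    return candidate

rng = random.Random(1)
samples = [digit_candidate(2, 3, rng) for _ in range(8000)]
# Three coin flips leave the candidate uniform on 1..8.
```

After k steps in base b the candidate is uniform on 1..b^k, which is the "doubling (or 10x-ing) the candidate pool per step" speedup described above.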

With all these procedures, you can show that for any given positive integer, that integer has a fair chance at some specific iteration to become the candidate, and then to remain the candidate in all subsequent iterations. However, as with many other supertasks, there's no way to characterize what the end results would be. Consider this: replace the random chance in either of the above algorithms with a certainty (always selecting 9*10^n in the latter case), and it becomes a supposed method for finding the highest integer. Which is pretty much meaningless.

On the other hand, the second procedure appears to tell you right away whether the eventual result will be odd or even. After the first (n=0) step, you know what the least significant bit must be, regardless of what happens subsequently. Likewise, the third procedure appears to tell you right away whether or not the eventual random integer will be a multiple of 5, as well as odd or even, since only multiples of 10 can be added to it after the n=0 step. And it tells you whether or not the eventual random number will be a multiple of a billion, after the n=8 step. So provided the infinite procedure is trusted to yield an integer at all, rather than, say, a giraffe or a black hole, can't you go ahead and make use of the determination of oddness or evenness or multiple-of-a-billion-ness etc. while you're waiting forever for the final result? And can't you examine the probabilities of those finite initial steps to determine the probability of those early-emerging outcomes?
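The "settled last digit" observation can be checked directly on a truncated base-10 procedure: everything added after the n=0 step is a multiple of 10, so the residue mod 10 — and with it parity and divisibility by 5 — never changes (a sketch, with my own names):

```python
import random

def digit_states(num_digits, rng):
    """Candidate after each step of the truncated base-10 procedure."""
    candidate = 1
    states = []
    for n in range(num_digits):
        candidate += rng.randrange(10) * 10 ** n
        states.append(candidate)
    return states

rng = random.Random(7)
for _ in range(1000):
    states = digit_states(6, rng)
    # The last digit is fixed at the n=0 step and never moves again,
    # so oddness/evenness and multiple-of-5-ness are decided early.
    assert all(s % 10 == states[0] % 10 for s in states)
    assert all(s % 2 == states[0] % 2 for s in states)
```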

So it's tempting to think, as Bogomolny does, that you can. That is, IF your necessarily imperfect definition of "a randomly selected integer" includes the "as fair as possible" requirement outlined above: that for a given positive integer, at some specific iteration of the selection process, that integer has a fair chance (equal to that of all previous or co-candidates as of that iteration) to be the candidate, and then to remain the candidate in all subsequent iterations.

Of course, there are other orderings and other "random" selection algorithms that could be used that would alter the expectations. For instance, one corresponding to the "corridor A/corridor B" arrangement frequently mentioned upthread, where all the multiples-of-a-billion numbered hotel rooms and only those (those also being the supposedly exceptional "opulent" or "dingy" rooms) are located on one corridor, and the selection process begins with a coin flip to choose which corridor. (That corresponds to a modification of the base-10-digit selection algorithm, where at the outset with a 50% chance you set the candidate at one billion and then start iterating with n=9.) That appears to yield an equal chance of selecting a multiple-of-a-billion integer as otherwise, but it clearly fails the as-fair-as-possible condition.
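A small-scale sketch of the corridor version, with B = 1000 standing in for one billion (the scaling-down and names are mine; note the tails branch is the ordinary base-10 procedure, which can still occasionally land on a multiple of B):

```python
import random

B = 1000  # small stand-in for "one billion"

def corridor_pick(num_digits, rng):
    """Coin flip first: heads puts you in the multiples-of-B corridor
    (start at B, then add only random multiples of B); tails runs the
    ordinary base-10 procedure starting from 1.
    """
    if rng.random() < 0.5:
        candidate, scale = B, B
    else:
        candidate, scale = 1, 1
    for n in range(num_digits):
        candidate += rng.randrange(10) * scale * 10 ** n
    return candidate

rng = random.Random(3)
hits = sum(corridor_pick(4, rng) % B == 0 for _ in range(20000))
# hits / 20000 comes out near 1/2, not near 1/B.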

Note that the first algorithm I outlined, which iterates through the set of integers by 1's, meets the as-fair-as-possible condition, but makes no early (in any finite number of steps) determination of whether the eventual selection will be odd or even etc.

So to sum up, is it possible to salvage the intuitive notion of a 1 in a billion chance of selecting a multiple of a billion from the infinite set of positive integers (i.e. being assigned an exceptional hotel room), even if we don't know exactly what the selection process is, provided that (a) the selection process yields an unalterable decision on whether or not the eventual outcome is a multiple of a billion within a finite number of steps, and (b) the selection process meets the as-fair-as-possible condition?
__________________
A zÝmbie once bit my sister...
Old Yesterday, 06:08 PM   #284
caveman1917
Philosopher
 
Join Date: Feb 2015
Posts: 5,828
Originally Posted by Myriad View Post
Consider the algorithm I mentioned before. Select 1 as the candidate; then with probability 1/2 change the candidate to 2, then with probability 1/3 change the candidate to 3, and so forth. At each step before the consideration of candidate n, the current candidate is one of the integers from 1 to n-1, with equal probability 1/(n-1). After the random 1/n chance of switching to candidate n, each of the integers from 1 to n will be the current candidate with probability 1/n.
Note that the results can be changed at will by reordering the candidates. ETA2: Interpreting statements such as "select 1 as the candidate" as "select the first candidate".

Quote:
There's a "more efficient" version that starts with candidate 1, then for all n={0, 1, 2, ...} gives a 50% chance of adding 2^n to the candidate. This has some advantages, such as doubling the number of possible candidates considered at each step, and thus taking only logarithmic time (which of course is still infinite); and only needing a 50-50 random generator instead of one that can correctly roll arbitrary infinitesimal chances. It amounts to randomly generating a binary integer one bit at a time from right to left.

You could also do the same with, say, right to left base 10 digits. Start with 1, then for all n = {0, 1, 2...} add an equally distributed random selection from {0, 10^n, 2*10^n, ... 9*10^n}. That only needs a slightly more capable RNG (a d10 instead of a coin flip), and is another 3.3 times faster.
How does one add/multiply/exponentiate hotel rooms though? I think you should be more precise whether you're talking about N (and what exactly you mean by N, do you mean the set N, or (N, ≤), or (N, ≤, +) or ...?) or a set of hotel rooms - let's call it H - or some arbitrary set X such that |X| = |N|.

Quote:
So it's tempting to think, as Bogomolny does, that you can. That is, IF your necessarily imperfect definition of "a randomly selected integer" includes the "as fair as possible" requirement outlined above: that for a given positive integer, at some specific iteration of the selection process, that integer has a fair chance (equal to that of all previous or co-candidates as of that iteration) to be the candidate, and then to remain the candidate in all subsequent iterations.

Of course, there are other orderings and other "random" selection algorithms that could be used that would alter the expectations. For instance, one corresponding to the "corridor A/corridor B" arrangement frequently mentioned upthread, where all the multiples-of-a-billion numbered hotel rooms and only those (those also being the supposedly exceptional "opulent" or "dingy" rooms) are located on one corridor, and the selection process begins with a coin flip to choose which corridor. (That corresponds to a modification of the base-10-digit selection algorithm, where at the outset with a 50% chance you set the candidate at one billion and then start iterating with n=9.) That appears to yield an equal chance of selecting a multiple-of-a-billion integer as otherwise, but it clearly fails the as-fair-as-possible condition.
Simply reordering N and applying your first algorithm we can make the result anything we want and not fail the as-fair-as-possible condition. Note that all your algorithm requires is a notion of the n-th candidate, which we can define based on order. For example consider the reordering in my previous post:
Originally Posted by caveman1917 View Post
Consider the following, where di and ndi are the i-th natural numbers divisible and non-divisible by 1e9, respectively:

{d1}, {d1, nd1}, {d1, nd1, d2}, ... -> N

and apply P:

P({d1}) = 1, P({d1, nd1}) = 1/2, P({d1, nd1, d2}) = 2/3, ... -> P(N) = 1/2
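The partial densities of that reordering can be tabulated exactly; here's a sketch using a small divisor (1000) in place of 1e9, with names of my own:

```python
from fractions import Fraction
from itertools import count, islice

def alternating_order(divisor):
    """Enumerate N as d1, nd1, d2, nd2, ...: the i-th multiple of
    `divisor` alternating with the i-th non-multiple."""
    multiples = (k for k in count(1) if k % divisor == 0)
    non_multiples = (k for k in count(1) if k % divisor != 0)
    for d, nd in zip(multiples, non_multiples):
        yield d
        yield nd

def partial_densities(divisor, how_many):
    """P over each finite prefix: the fraction of prefix members
    that are multiples of `divisor`."""
    seen, densities = 0, []
    for i, x in enumerate(islice(alternating_order(divisor), how_many), 1):
        seen += (x % divisor == 0)
        densities.append(Fraction(seen, i))
    return densities

dens = partial_densities(1000, 10)
# Prefix densities: 1, 1/2, 2/3, 1/2, 3/5, 1/2, ... -> limit 1/2,
# even though the natural density of multiples of 1000 is 1/1000.
```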
Originally Posted by Myriad
Note that the first algorithm I outlined, which iterates through the set of integers by 1's, meets the as-fair-as-possible condition, but makes no early (in any finite number of steps) determination of whether the eventual selection will be odd or even etc.
So does my reordering.

Quote:
So to sum up, is it possible to salvage the intuitive notion of a 1 in a billion chance of selecting a multiple of a billion from the infinite set of positive integers (i.e. being assigned an exceptional hotel room), even if we don't know exactly what the selection process is, provided that (a) the selection process yields an unalterable decision on whether or not the eventual outcome is a multiple of a billion within a finite number of steps, and (b) the selection process meets the as-fair-as-possible condition?
My guess on (a) is: Only if you assume at least the structure (X, ≤, +, ×), so that you can do (modular) arithmetic "tricks" such as that least-significant-bit trick. It's also hinted at by Bogomolny when he talks about N/n but then for some reason doesn't use it in his actual definition, which is the equivalent of natural density.

My guess on (b) is: Any such procedure can be made to give any result we want by reordering the set, and it will still meet this condition.

ETA: My guess on the larger question "is it possible to salvage the intuitive notion of a 1 in a billion chance": It's only intuitive because you're used to thinking in terms of natural numbers, which come with an intuitive natural ordering (note how the concept of natural density can be extended to a general density with any well-order, natural density being the special case where the order is the natural one). You have this intuition in the first place because you're letting structure "leak" from (N, ≤) to X, where we only have |X| = |N| and know nothing about any order on X. The intuition is wrong and results from a failure to clearly delineate between things like X and (X, ≤); it shouldn't be salvaged, it should be removed.
__________________
"Ideas are also weapons." - Subcomandante Marcos
"We must devastate the avenues where the wealthy live." - Lucy Parsons
"Let us therefore trust the eternal Spirit which destroys and annihilates only because it is the unfathomable and eternal source of all life. The passion for destruction is a creative passion, too!" - Mikhail Bakunin

Last edited by caveman1917; Yesterday at 06:58 PM.