The Foundations of Cognitive Theory

barehl

The most basic cognitive construct that comes to mind is the Chinese Room. John Searle created this thought experiment to argue against Strong AI, and Dennett has discussed it. I think we can get some more utility out of it.

The basic notion was that you have a room where pieces of paper with questions are fed through a slot. The person in the room looks up the question in a book and copies the answer onto a sheet of paper, which is fed back out through another slot. To make the point that the answers have to come from the book, it is stipulated that the questions and the book are in Chinese and that the person in the room doesn't speak Chinese. The questions are looked up strictly by matching the patterns of the Chinese characters.
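A minimal sketch of the room as pure pattern matching, in Python; the question and answer strings here are hypothetical placeholders:

```python
# A toy "Chinese Room": the facilitator matches each incoming question
# against a fixed rule book and copies out the stored answer.
# The facilitator never interprets the symbols; it only matches them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，当然。",  # "Do you speak Chinese?" -> "Yes, of course."
}

def facilitator(question: str) -> str:
    # Strict pattern match: no match means no answer at all.
    return RULE_BOOK.get(question, "")

print(facilitator("你好吗？"))  # 我很好，谢谢。
```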

This construct does not require a specific language, nor that books or pieces of paper are used. The facilitator in the room could be a person but could also be a computer. These details are not important. Some will probably note that this is related to the Turing Test. The method of communication could be handwritten, typed, or spoken. The answers could be in books, on punch cards, on tape, or on large arrays of hard drives.

We agree that neither the data nor the storage medium is conscious. We agree that the facilitator provides no information. So, we start with no consciousness. Watson is a modern example of this.

Questions are entered, but as they become more complex the data will not have matches. So, we add more data. This works for a little while, but we then realize that it will be impossible to cover all permutations of reality. The number of questions is uncountable in the same way that real numbers are uncountable. Therefore there isn't enough information available on the internet and in the Library of Congress to answer these questions, and there never would be, no matter how large we made the data store.

Assertion 1.) It is not possible to create a working Chinese Room based solely on pattern matching.

If Watson can't do this task on even a theoretical level then we need to add something. Let's try using formal logic and formalized data. This should let us make additional deductions and allow us to answer more questions. Cyc is a modern example of this.

So, we assume that our data is in sets that show relationships. For example, a Cocker Spaniel is a dog. A dog is a mammal. A mammal is a living creature.

This would allow us to generalize. For example, if we have a question about Rex who is a dog we can assume that anything that is generally true of a dog is also true about Rex such as having four legs and being warm-blooded. We would also know that Rex is not a cat or a tree. This is a big improvement. We could state, for example, that John is married to Linda and then ask who John's wife is. This system could come up with the right answer based on marriage as a set and husbands and wives as subsets whereas simple pattern matching would not.
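A minimal sketch, assuming a toy Python encoding of the is-a chain and the marriage relation (all names and properties are illustrative):

```python
# Toy taxonomy: is-a links, plus properties attached to categories.
IS_A = {"cocker spaniel": "dog", "dog": "mammal", "mammal": "living creature"}
PROPERTIES = {"dog": {"has four legs", "is warm-blooded"}}
INSTANCES = {"Rex": "dog"}

def categories(kind):
    # The kind itself plus everything it generalizes to.
    yield kind
    while kind in IS_A:
        kind = IS_A[kind]
        yield kind

def facts_about(name):
    # An instance inherits every property of its category and its ancestors.
    props = set()
    for kind in categories(INSTANCES[name]):
        props |= PROPERTIES.get(kind, set())
    return props

# Marriage as a set of pairs; a wife is the other member of John's pair.
MARRIED = {("John", "Linda")}

print(facts_about("Rex"))                          # {'has four legs', 'is warm-blooded'}
print("mammal" in categories(INSTANCES["Rex"]))    # True: Rex generalizes to mammal
print(next(b for a, b in MARRIED if a == "John"))  # Linda
```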

Any simple relationship should be definable in terms of set theory and logic. But, suppose we asked why a helium balloon will rise if you let go of the string. This system would not provide an answer.

Assertion 2.) It is not possible to create a working Chinese Room based on pattern matching, set theory, and logic.


This seems to be a good start and then we can continue.
 
This seems to be a good start and then we can continue.
There are show-stopping flaws in the chain of logic you've presented here, but I want you to actually get where you're trying to go before we get waylaid discussing them, so please continue.

When you're done, then we can assess.
 
I'm correcting the following error because this is the subforum devoted to Science, Mathematics, Medicine, and Technology.

The number of questions is uncountable in the same way that real numbers are uncountable.
False. Questions are finite strings of symbols taken from a fixed finite alphabet (which happens to be Chinese in your example, but that doesn't matter). There is only a countable infinity of finite strings of symbols over a fixed finite alphabet.

I suspect this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong. If I'm wrong about that, please explain how the distinction between countably and uncountably infinite cardinalities affects your argument.
 
The most basic cognitive construct that comes to mind is the Chinese Room.
The Chinese Room is a philosophy thought experiment.
The result of the Chinese Room thought experiment is that there is no difference between the roles of a computer mapping questions to answers and a person mapping questions to answers. The person does not understand the language, so the computer does not understand the language either. And without understanding, we should not describe the process as thinking (in Searle's opinion).

A way to see that the assertions in the OP are wrong is to forget about Chinese. We can use a language that consists of only 10 questions, each denoted by a symbol, e.g. A-J, with an answer, e.g. 1-10. The facilitator has a book mapping the 10 symbols to answers. We get the same result: a person and a computer will do the same mapping without understanding the language, and neither would be considered to be thinking.
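As a sketch, the entire room for this ten-question language fits in a few lines of Python; the same table serves a person and a computer alike:

```python
# The whole "language": ten question symbols A-J mapped to answers 1-10.
BOOK = {chr(ord("A") + i): str(i + 1) for i in range(10)}

def room(question: str) -> str:
    # Whether a person or a computer performs this lookup, the mapping
    # is identical, and neither party understands what the symbols mean.
    return BOOK[question]

print(room("A"), room("J"))  # 1 10
```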
 
A way to see that the assertions in the OP are wrong is to forget about Chinese. We can use a language that consists of only 10 questions, each denoted by a symbol, e.g. A-J, with an answer, e.g. 1-10. The facilitator has a book mapping the 10 symbols to answers. We get the same result: a person and a computer will do the same mapping without understanding the language, and neither would be considered to be thinking.

I know this is fairly standard, but on what grounds do we say it isn't thinking? To do so, it seems we have to define thinking as something other than a direct causal chain, which may be begging the question.

Shouldn't we start with asking whether thinking, as a distinct category, even exists? Maybe it really is just switches being thrown, mindlessly, in my head.
 
I know this is fairly standard, but on what grounds do we say it isn't thinking?
It is not really we - Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking...".

IMO this is a bit too circular. If Searle was the one in the room following the computer code then we could equally argue that there is no "understanding" and so no "thinking". That does not mean that Searle in the Chinese room does not have a "mind"! It means that the definition of mind we are using includes thinking based on understanding.
 
It is not really we - Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking...".

IMO this is a bit too circular. If Searle was the one in the room following the computer code then we could equally argue that there is no "understanding" and so no "thinking". That does not mean that Searle in the Chinese room does not have a "mind"! It means that the definition of mind we are using includes thinking based on understanding.

It gets even harder to pin down when you start asking if other animals think. An ant colony sure seems to be acting with goal-directed purpose. But if some other attribute, like introspection, is to be the bright line, then most of my day I'm not a thinker either.

I'm imagining a discussion among birds.
"Artificial flight just isn't possible. If you are going to design every little feather and muscle we have, you might as well just lay another egg instead."
"Hey now, what about planes? Those humans say they are flying."
"Yeah, right. Do they have true independence? Can they take off and land by by reacting to local conditions in real time? Can they modify the shape of their wings?"
"Well, they have ailerons and stuff that does change the wing shape."
"Yeah, in only the clumsiest of ways. We don't need runways to take off or land, and I haven't yet seen a plane with goal-seeking flight behaviors. It's just mimicry, not like natural flight at all."
 
It is not really we - Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking...".

IMO this is a bit too circular. If Searle was the one in the room following the computer code then we could equally argue that there is no "understanding" and so no "thinking". That does not mean that Searle in the Chinese room does not have a "mind"! It means that the definition of mind we are using includes thinking based on understanding.

Which begs the question: what is "understanding" and what is "thinking", and why can one type of information-processing system, namely a network-based electrochemical processor, achieve it, but the other, a semiconductor-based electric processor, not?

Finally, this seems like a useless exercise anyway, a quibble, really. If the machine acts for all practical purposes like it's a "thinking" machine, who cares if it "really" thinks or not?
 
Which begs the question: what is "understanding" and what is "thinking", and why can one type of information-processing system, namely a network-based electrochemical processor, achieve it, but the other, a semiconductor-based electric processor, not?

Finally, this seems like a useless exercise anyway, a quibble, really. If the machine acts for all practical purposes like it's a "thinking" machine, who cares if it "really" thinks or not?


…and the fact that you cannot even begin to definitively answer a single one of your questions very clearly illustrates why so many are so cautious about how we assign such attributes.

If (just for example) ‘what thinks’ becomes the metric by which sentience (and, by extension, legal rights) is established, then the capacity to accurately adjudicate the condition becomes rather important.

It’s called ethics. Science is not too good at it so far.
 
Apologies to Searle, but the idea of the Chinese Room as an argument against machine intelligence is stupid: the fact that the parts don't understand what they are doing doesn't mean that the system as a whole doesn't understand.
 
Why can't the system based on pattern matching, logic, and set theory answer why a helium balloon will rise if you let go of the string?

Helium near standard temperatures and pressures is less dense than air.
The volume of a helium balloon is mostly filled with helium.
Air is a fluid medium.
Positive buoyancy in a fluid medium occurs when an object displaces more mass of fluid than its own mass.
Positively buoyant objects in a fluid medium experience an upward force.
A tether is a tensile member providing a downward force to counteract an upward force.
Most small helium balloons are tethered by a string that is held.
Objects accelerate in the direction of the net forces acting on them.
etc.
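One hypothetical way to mechanize that chain is to encode the facts as Horn-style rules and chain forward until the conclusion appears; a minimal Python sketch, with made-up proposition names:

```python
# Toy forward chaining over the balloon facts above.
# Each rule: (set of premises) -> conclusion. Names are hypothetical.
FACTS = {"helium less dense than air", "balloon mostly helium",
         "air is a fluid", "string released"}
RULES = [
    ({"helium less dense than air", "balloon mostly helium"},
     "balloon displaces more mass of air than its own mass"),
    ({"balloon displaces more mass of air than its own mass", "air is a fluid"},
     "balloon is positively buoyant"),
    ({"balloon is positively buoyant"}, "balloon experiences upward force"),
    ({"balloon experiences upward force", "string released"}, "balloon rises"),
]

def forward_chain(facts, rules):
    # Repeatedly fire any rule whose premises are all established.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("balloon rises" in forward_chain(set(FACTS), RULES))  # True
```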

There are plenty of fully capable humans who could not correctly answer the question of why a helium balloon will rise if you let go of its string, because they don't know some of the relevant facts. Even fewer will be able to answer the question of why John married Linda.

Must a "Chinese Room" demonstrate omniscience in order to be evaluated as capable of understanding or of intelligence?
 
I could argue that every neuron in my brain is a form of Chinese Room. A single neuron certainly does not think, nor is it conscious!

Using the Chinese Room to "disprove" AI is moronic. It is just a fig leaf for people who believe in a soul or some other non-materialistic source of mind.

Why? If you agree that the mind is purely a product of the brain, then you have to agree that non-thinking components can create/maintain/generate/whatever something that actually thinks. In other words, not only can a human be a bunch* of Chinese Rooms; in fact, he has to be a bunch of Chinese Rooms. Yay for emergence, I guess?

*I use the word "bunch" in the same way one could use the word "pond" for what is between the Americas and Eurasia+Africa.
 
That is not quite John Searle's Chinese Room argument, which is against a philosophical position, not the possibility of AIs.
The Chinese room is a thought experiment presented by the philosopher John Searle to challenge the claim that it is possible for a computer running a program to have a "mind" and "consciousness" in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named "strong AI":

"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
 
Okay, after reading the quote my understanding of the Chinese Room didn't change. What is Mader misunderstanding, in your opinion?
 
That the argument is not against the existence of any AI - it is against the existence of "strong AI".

The fact that he didn't use the words "strong AI" doesn't mean that's not what mader was talking about. He clearly points out that the Chinese Room argument could be applied to a human brain just as much as to a computer, which means that, if the argument is correct, one of the following must be true:

1. humans, just like computers, cannot have the properties of "strong AI".
2. there is something other than brains involved in human cognition (i.e. a soul, or some sort of magic add-on that allows us to get past the limits implied by the argument).
 
The fact that he didn't use the words "strong AI" doesn't mean that's not what mader was talking about. He clearly points out that the Chinese Room argument could be applied to a human brain just as much as to a computer, which means that, if the argument is correct, one of the following must be true:

1. humans, just like computers, cannot have the properties of "strong AI".
2. there is something other than brains involved in human cognition (i.e. a soul, or some sort of magic add-on that allows us to get past the limits implied by the argument).

Does it have to be magic though? Couldn't it just be an emergent property of cells?

For example, if I want to build a skyscraper, I can't do it with stone. Stone doesn't have the properties of steel, and there are inherent limits on the material itself. It seems to me there could be something similar involved in trying to "build a brain" with logic circuits.

For the Chinese room, language (and understanding) might be built into the property of being a native speaker, embedded in a personal environment, with a history and so on. I can't imagine the Chinese room is spontaneously talking to itself or reflecting on whether or not a particular answer to a question is going to offend the questioner.

In practice, the "real" speaker will answer the same question differently, depending on context, both internal ("How am I feeling right now?") and external (the girl asking is extremely hot). Without capturing all this, I don't think the room would fare very well.
 
In practice, the "real" speaker will answer the same question differently, depending on context, both internal ("How am I feeling right now?") and external (the girl asking is extremely hot). Without capturing all this, I don't think the room would fare very well.
I don't think it was meant to. The Chinese Room is designed to tackle one question: "does it understand Chinese?" That answer has to be "yes," at least from a black box perspective. If the room doesn't convincingly appear to understand Chinese, then the gotcha when the curtain is whisked away to reveal some shmuck with a lookup table won't have any effect.

Anything else is beside the point. The Chinese Room is not meant to be a model for a full AI, so arguments such as barehl's OP, which demand further knowledge of the world in order to craft appropriate answers, are not valid. You don't even have to resort to emotion or physics to fool the room; "What was your last answer?" will be wrong most of the time. But it'll be wrong in correct Chinese.

Speaking of barehl's OP, will we ever get the promised continuation?
 
I think the best way to understand why the Chinese Room doesn't work is that it is an Argument from Incredulity. "Really!!! You'd say this blind process is intelligent!!!" The Chinese Room is essentially slowing down a process that happens incredibly fast in humans (associating language artifacts with some action, the output of the room in this example), and making it so slooooooow, it no longer appears to be intelligent behavior.
 
I'm correcting the following error because this is the subforum devoted to Science, Mathematics, Medicine, and Technology.
Thank you; I'm always open to a correction and many people on this forum have made corrections to my posts in the past.

Questions are finite strings of symbols taken from a fixed finite alphabet (which happens to be Chinese in your example, but that doesn't matter). There is only a countable infinity of finite strings of symbols over a fixed finite alphabet.

I suspect this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong. If I'm wrong about that, please explain how the distinction between countably and uncountably infinite cardinalities affects your argument.

You do cling to those ad hominems like a child hugging a teddy bear in the dark. But, you have an obvious flaw in your reasoning. A question has arbitrary length. It could be ten characters or one thousand or one million or one septillion. There is no defined stopping point for a question. But, in terms of pattern matching we can illustrate this fairly simply.

Instances of dogs are countable in the same way that integers are countable. So, while at any given point in time we can only deal with a finite set of dogs in the same way that we can only deal with a finite set of integers, there is no stopping point. We can always have more or larger integers and we could always have more or larger instances of dogs. Admittedly, you will eventually run out of places to put these dogs but you would also run out of storage to write down integers.

But, for every integer there are an infinite number of reals, such as the reals between zero and one. And, for any instance of dog there are an infinite number of questions. This is why they are both uncountable.
 
A way to see that the assertions in the OP are wrong is to forget about Chinese. We can use a language that consists of only 10 questions, each denoted by a symbol, e.g. A-J, with an answer, e.g. 1-10. The facilitator has a book mapping the 10 symbols to answers. We get the same result: a person and a computer will do the same mapping without understanding the language, and neither would be considered to be thinking.
So, your way of showing that I'm wrong is to start with different premises, make a different argument, and thereby come to a different conclusion? That one did make me laugh. If your theory of philosophy were true then only one argument could exist at a time. The actual way to show that I'm wrong is to either show that one of my premises is wrong or to show a logical flaw in my argument.
 
I know this is fairly standard, but on what grounds do we say it isn't thinking?

You are claiming that a book, filing cabinet, card catalog, or database thinks? If that were your definition, then something like a coin sorter would also think. And with a definition this weak, most computer programs would have many instances of thinking.
 
Finally, this seems like a useless exercise anyway, a quibble, really. If the machine acts for all practical purposes like it's a "thinking" machine, who cares if it "really" thinks or not?

Thus far, the Chinese Room I've described is not even theoretically capable of "acting like a thinking machine". If this were a trial, I would object that your comment assumes facts not in evidence.
 
Apologies to Searle, but the idea of the Chinese Room as an argument against machine intelligence is stupid: the fact that the parts don't understand what they are doing doesn't mean that the system as a whole doesn't understand.
You again are missing the point. The Room I've described so far is not even theoretically capable of performing the same function. If you can describe what it would take to make the room functional, you are welcome to do so.
 
Why can't the system based on pattern matching, logic, and set theory answer why a helium balloon will rise if you let go of the string?
Finally, a halfway decent response.

Helium near standard temperatures and pressures is less dense than air.
Yes, we could do that with a set.

The volume of a helium balloon is mostly filled with helium.
Yes, a helium balloon as a whole weighs less than the volume of air it displaces.

Air is a fluid medium.
This is where you are running into a problem. I can't see how we could describe viscosity, turbulence, and kinetic energy in terms of set theory.

Positive buoyancy in a fluid medium occurs when an object displaces more mass of fluid than its own mass.
We could probably describe this.

Positively buoyant objects in a fluid medium experience an upward force.
I'm not seeing any way to describe force in terms of set theory.

A tether is a tensile member providing a downward force to counteract an upward force.
Most small helium balloons are tethered by a string that is held.
Objects accelerate in the direction of the net forces acting on them.
etc.
Again, I don't think we can describe forces, acceleration, or movement with set theory.

Must a "Chinese Room" demonstrate omniscience in order to be evaluated as capable of understanding or of intelligence?
They don't have to be technical. You could ask what you would do to stay warm if it was cold outside. Someone who is an educator would probably recognize these types of questions right away.

For example, I could ask how many plates you would need to set on the table if there are four of you eating dinner. This is easily answered by set theory. I could say that you have iced tea, lemonade, and cherry soda. You get some cherry soda, but your guest doesn't like cherry soda. What do you do? This again can be answered by set theory, by excluding the drink that they don't like.

Many questions like this can be answered, but not all of them. Some questions that can be answered by children are beyond set theory. This generally involves anything with an internal process or anything that requires modeling. For example, we had a pencil sharpener that would get full of shavings, and you would have to dump them out in the trash can. Children can easily understand this, but I don't think it can be described with set theory. These examples seem to indicate where we would have to go next with our Room.
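For what it's worth, both dinner questions reduce to one-line set operations; a minimal Python sketch with placeholder names:

```python
# Plates: one per person in the set of diners.
diners = {"you", "alice", "bob", "carol"}
print(len(diners))  # 4 plates

# Drinks: offer the guest anything except what they dislike.
drinks = {"iced tea", "lemonade", "cherry soda"}
guest_dislikes = {"cherry soda"}
print(drinks - guest_dislikes)  # {'iced tea', 'lemonade'}
```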
 
I don't think it was meant to. The Chinese Room is designed to tackle one question: "does it understand Chinese?" That answer has to be "yes," at least from a black box perspective. If the room doesn't convincingly appear to understand Chinese, then the gotcha when the curtain is whisked away to reveal some shmuck with a lookup table won't have any effect.

It seems to me that "understanding Chinese" is just another way of saying "understands the world," because disconnected utterances are too loose.

Under the weak standard, the thought experiment breaks down. I don't even need a room. I can put my written question into an envelope, seal it, then open the envelope again to reveal - Hey Presto! - correctly written Chinese. Is the understanding then to be found in the pen, or the paper, or the writer/reader?

Any recipe-driven response (as in the "real" room) is simply another form of rearranging the symbols on the paper, in the same way mathematics is just rearranging symbols from some starting position. The critical difference, for me, is that in reality, the same question will generate different responses, and those differences aren't just generated randomly, but have context.
 
If you want us to try to falsify your theory, it helps to say what it actually is.

We weren't talking about my theory. This thread to me is remedial, covering the low-level basics of cognitive theory. This is below what I would consider foundational. You claimed to find a show-stopping error in my thought experiment about a version of Searle's Chinese Room. Do I think you've actually found an error? No.

I'm still trying to decide how low-level I have to go to explain the concepts of my theory, but this thread has not exactly been encouraging. I've seen repeated evidence in the various threads that cognitive theory is much more difficult for people to understand than I had expected.
 
Under the weak standard, the thought experiment breaks down. I don't even need a room. I can put my written question into an envelope, seal it, then open the envelope again to reveal - Hey Presto! - correctly written Chinese. Is the understanding then to be found in the pen, or the paper, or the writer/reader?
I... don't think you read my post right.

When Searle originally posited the Chinese Room argument, expert systems were all the rage in AI. An expert system is basically a giant, complicated lookup table, or an enormous chain of if-thens. Presented with a query, you'd percolate through the table and arrive at an answer. They'd already been built and proven for niche applications where expert knowledge was required (hence the name), like factory-floor debugging, and people were full of optimism that the agglomeration of many such tables was the route to real intelligence.
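For readers who never met one, here is a hypothetical fragment of such a table, written as a Python chain of if-thens; the symptoms and diagnoses are invented for illustration:

```python
# A toy expert system: percolate from observed symptoms to a diagnosis
# through a fixed chain of if-thens. No rule "understands" the machinery.
def diagnose(symptoms: set) -> str:
    if "no power" in symptoms:
        if "breaker tripped" in symptoms:
            return "reset the breaker"
        return "check the power supply"
    if "overheating" in symptoms:
        return "inspect the coolant pump"
    return "refer to a human expert"

print(diagnose({"no power", "breaker tripped"}))  # reset the breaker
```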

Searle's argument uses a "simple" concept: understanding a foreign language. I don't speak Chinese. If you were to put me in a room with a table of questions and answers, I could perform the role of an expert system, matching query with reply, but I still wouldn't speak Chinese. If the table were big and complicated enough it might appear to an outside observer that I do speak Chinese. But I still would not speak Chinese.

Now, whether that argument is sound or not, I'm trying not to weigh in on, since I want to avoid being sidetracked to hear what barehl has to say. But that is the argument. It's not a Turing Test. The Room doesn't have to convince someone that it is strongly intelligent, merely that it understands Chinese.

We weren't talking about my theory. This thread to me is remedial, covering the low-level basics of cognitive theory. This is below what I would consider foundational. You claimed to find a show-stopping error in my thought experiment about a version of Searle's Chinese Room. Do I think you've actually found an error? No.

I'm still trying to decide how low-level I have to go to explain the concepts of my theory, but this thread has not exactly been encouraging. I've seen repeated evidence in the various threads that cognitive theory is much more difficult for people to understand than I had expected.
Yeah, I'll drink to that. It's like wrestling a falsifiable test out of an MDC applicant.

Still, why don't you try us? Out with your theory, and we shall remedialize it, and bang rocks together, and drool upon it, and gnaw upon its corners, and gradually develop the proto-fundamental understanding you seem to think we lack. At which point your words will still be here, and then we can necro this thread to partake once more, this time treating you like the genius I am certain you will demonstrate yourself to be.
 
Now, whether that argument is sound or not, I'm trying not to weigh in on, since I want to avoid being sidetracked to hear what barehl has to say. But that is the argument. It's not a Turing Test. The Room doesn't have to convince someone that it is strongly intelligent, merely that it understands Chinese.

I think I get you. My assertion was that "strongly intelligent" and "speaks Chinese" are the same thing.

But you are correct, I want to hear more from barehl also.
 
So, your way of showing that I'm wrong is to start with different premises, make a different argument, and thereby come to a different conclusion?
No, barehl: My way of showing that the assertions in the OP are wrong is to
  1. Start with the same premises except using a different simpler language.
  2. Make the same argument.
  3. Come to the same conclusion as the Chinese room.
  4. Therefore conclude that this functionally equivalent version of the Chinese room makes your assertions in the OP wrong.
That simple language is a counter-example that invalidates your assertions.
Real World Fact 1.) It is possible to create a working "Chinese" room based solely on pattern matching.
Real World Fact 2.) It is possible to create a working "Chinese" room based on pattern matching, set theory, and logic.

The extension to a language such as Chinese also invalidates your assertion - the number of questions that can be asked in any language is finite. All that happens is that the database of answers becomes very much larger.
Also, the Chinese room as described does not exclude the possibility of answering "I do not know" to any question that cannot be matched. Human Chinese speakers would not expect another Chinese speaker to know the answer to every question. They would expect "I do not know" as an answer to some questions. Thus a computer program that answers most questions would pass the Turing test.
 
(snipped just a bit out)
Real World Fact 2.) It is possible to create a working "Chinese" room based on pattern matching, set theory, and logic.

The extension to a language such as Chinese also invalidates your assertion - the number of questions that can be asked in any language is finite.

That can't be right.
"What number comes after 1?"
"What number comes after 2?"
"What number comes after 3?"
...

Wouldn't that be an infinite set of questions?
 
That can't be right.
"What number comes after 1?"
"What number comes after 2?"
"What number comes after 3?"
...

Wouldn't that be an infinite set of questions?
You are right. But the Turing test has a finite number of questions. And a pattern-matching program would answer most or even all of those "What number comes after n" questions by recognizing the pattern.

ETA: Also think about neural networks as the implementation of the AI. A neural network would learn Chinese in a similar way as we do. It would not look up the answer as a one-to-one match to the question. It would have been taught that the answer to "What number comes after n" is "n + 1". So the number of answers to the questions in the Turing Test would be a function of how well "educated" the computer program is.
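A sketch of that kind of learned generalization, reduced to a single pattern in Python (a regex standing in for whatever rule the network actually learns):

```python
import re

# A pattern-matching answerer: one rule covers the whole infinite
# family of "What number comes after n?" questions.
def answer(question: str) -> str:
    m = re.fullmatch(r"What number comes after (\d+)\?", question)
    if m:
        return str(int(m.group(1)) + 1)
    return "I do not know"

print(answer("What number comes after 3?"))   # 4
print(answer("What number comes after 41?"))  # 42
print(answer("Why do balloons rise?"))        # I do not know
```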
 
You are right. But the Turing test has a finite number of questions. And a pattern-matching program would answer most or even all of those "What number comes after n" questions by recognizing the pattern.

Is ELIZA a good example of a "Chinese room"? In that, it can fool some people for a while.

I'm not particularly a fan of the Turing test. It seems to say more about my propensity to being tricked than much else.
 
Is ELIZA a good example of a "Chinese room"? In that, it can fool some people for a while.
ELIZA would not - the assumption of the Chinese room is that the AI can pass the Turing test. The best result so far looks like convincing 33% of the judges that it was human. Turing's original criterion for a pass was "70% of the time after five minutes of conversation".

Another aspect of the Turing test is that the AI can also give incorrect answers (as well as "I do not know"). The Chinese speaker in the room is free to make a mistake in matching the question.
 
This thread to me is remedial, covering the low-level basics of cognitive theory. This is below what I would consider foundational.
Agreed.

I'm correcting the following error because this is the subforum devoted to Science, Mathematics, Medicine, and Technology.
Thank you; I'm always open to a correction and many people on this forum have made corrections to my posts in the past.
I was happy to have pleased you by correcting your error.

But then you repeated your error:

Questions are finite strings of symbols taken from a fixed finite alphabet (which happens to be Chinese in your example, but that doesn't matter). There is only a countable infinity of finite strings of symbols over a fixed finite alphabet.

I suspect this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong. If I'm wrong about that, please explain how the distinction between countably and uncountably infinite cardinalities affects your argument.

You do cling to those ad hominems like a child hugging a teddy bear in the dark. But, you have an obvious flaw in your reasoning. A question has arbitrary length. It could be ten characters or one thousand or one million or one septillion. There is no defined stopping point for a question. But, in terms of pattern matching we can illustrate this fairly simply.

Instances of dogs are countable in the same way that integers are countable. So, while at any given point in time we can only deal with a finite set of dogs in the same way that we can only deal with a finite set of integers, there is no stopping point. We can always have more or larger integers and we could always have more or larger instances of dogs. Admittedly, you will eventually run out of places to put these dogs but you would also run out of storage to write down integers.

But, for every integer there are an infinite number of reals, such as the reals between zero and one. And, for any instance of dog there are an infinite number of questions. This is why they are both uncountable.
You are quite wrong. The spoiler contains a straightforward proof of the following theorem.

Theorem. The set of finite strings over a finite non-empty alphabet is countably infinite.


ETA: The following proof contains a minor error: the mapping I describe is one-to-one (injective) but not onto (surjective), hence not a bijection after all. (As usual, it's the "easy to prove" part that's wrong.) That bug is easy to fix by using a different mapping, but that mapping would be slightly more complicated. The easier fix is to stick with the mapping described in the original proof, and then apply yet another theorem that says the existence of any one-to-one (injective but not necessarily surjective) mapping from a set A to the natural numbers already proves A is countable. I have left the original (buggy) "proof" unchanged below.

Proof: Let k be the size of the alphabet, and let n be the least positive integer such that k + 1 < 2^n. We can then assign a distinct n-bit sequence of bits to every symbol in the alphabet without using the sequence of n zeroes. We can then define a bijection between finite strings over the alphabet and the natural numbers as follows:
  • The empty string maps to 0.
  • Every string consisting of a single alphabetic symbol maps to the natural number represented by that symbol's n-bit sequence of bits, interpreted as a binary numeral.
  • If j > 1, the string a_1...a_j maps to the natural number represented by the jn-bit sequence of bits formed by concatenating the n-bit sequences of the j symbols a_1, ..., a_j, interpreting that jn-bit sequence of bits as a binary numeral.
It is easy to show that this mapping is one-to-one and onto, hence a bijection. By definition, any set that can be placed in one-to-one correspondence with the natural numbers is countably infinite.
QED
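For the concretely minded, a small Python sketch of the encoding from the proof, using a hypothetical three-symbol alphabet; as the ETA notes, the mapping is injective (distinct strings get distinct numbers), which already suffices for countability:

```python
# Encode each symbol as a nonzero n-bit code, then read a string as the
# binary numeral formed by concatenating its symbols' codes.
def encode(string: str, alphabet: str) -> int:
    n = (len(alphabet) + 1).bit_length()               # least n with k+1 < 2**n
    code = {s: i + 1 for i, s in enumerate(alphabet)}  # skip the all-zero code
    value = 0
    for symbol in string:
        value = (value << n) | code[symbol]
    return value

alphabet = "abc"
for s in ["", "a", "b", "aa", "ab", "ba"]:
    print(repr(s), "->", encode(s, alphabet))  # six distinct natural numbers
```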

That proof should be accessible even to undergraduate students majoring in computer science. The theorem can be generalized to say the set of all finite strings over any countable alphabet (finite or infinite) is countable. That more general theorem is a simple corollary of these two theorems, which I state without proof:

Theorem. The Cartesian product of two countable sets is countable.

Theorem. A countable union of countable sets is countable.

I suggested you explain how the distinction between countably and uncountably infinite cardinalities affects your argument. You did not do so. We are left with two possibilities.

Possibility 1: Your argument does in some way depend upon the distinction, and your erroneous belief that the set of questions is uncountably infinite undermines that part of your argument.

Possibility 2: Your argument does not actually depend upon the distinction, so this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong.

I'm with Beelzebuddy on this:

Out with your theory, and we shall remedialize it, and bang rocks together, and drool upon it, and gnaw upon its corners, and gradually develop the proto-fundamental understanding you seem to think we lack. At which point your words will still be here, and then we can necro this thread to partake once more, this time treating you like the genius I am certain you will demonstrate yourself to be.
 
