
Go Back   International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
 


Old 14th March 2017, 02:15 PM   #1
Cainkane1
Philosopher
 
 
Join Date: Jul 2005
Location: The great American southeast
Posts: 8,350
Could a self-aware, self-programming robot exist as of now?

We have all seen robots that can walk and do amazing things, but they cannot think for themselves like the robots we see on sci-fi programs. We have no Robby the Robots; we have no Datas.

Could the scientific powers that be get together and actually make an intelligent, self-aware machine using today's technology?
__________________
If at first you don't succeed try try again. Then if you fail to succeed to Hell with that. Try something else.
Cainkane1 is offline
Old 14th March 2017, 02:18 PM   #2
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 21,726
Let's not get ahead of ourselves. First we must ask: if the scientific powers that be got together, could they actually agree on definitions for 'intelligent' and 'self-aware machine'?
theprestige is offline
Old 14th March 2017, 11:34 PM   #3
Roboramma
Philosopher
 
 
Join Date: Feb 2005
Location: Shanghai
Posts: 9,637
Originally Posted by Cainkane1 View Post
We have all seen robots that can walk and do amazing things, but they cannot think for themselves like the robots we see on sci-fi programs. We have no Robby the Robots; we have no Datas.

Could the scientific powers that be get together and actually make an intelligent, self-aware machine using today's technology?
If you mean a robot capable of doing the sorts of things that humans do in everyday life, like driving to the store, picking up your groceries, loading them into the car, driving back home, unloading the car and then putting your groceries away where you want them in your fridge/pantry? Not even close.

We have a long way to go from here to there.

On the other hand, given the often exponential nature of progress, a long way in terms of the amount of progress necessary doesn't necessarily mean a particularly long time as measured in years or decades.
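To make the "exponential progress" point concrete, here is a toy calculation (the doubling period and the size of the capability gap are both invented assumptions for illustration, not estimates from anyone in this thread):

```python
import math

# Invented assumptions, purely for illustration.
doubling_period_years = 2      # suppose capability doubles every 2 years
gap_factor = 1_000_000         # suppose robots are a millionfold short of human-level

# Number of doublings needed to close the gap, converted to years.
doublings = math.log2(gap_factor)
years = doublings * doubling_period_years

print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```

Under these made-up numbers, even a millionfold gap closes in about forty years: a "long way" in progress terms, but not necessarily in calendar terms.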
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline
Old 15th March 2017, 12:50 AM   #4
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Cainkane1 View Post
We have all seen robots that can walk and do amazing things, but they cannot think for themselves like the robots we see on sci-fi programs. We have no Robby the Robots; we have no Datas.

Could the scientific powers that be get together and actually make an intelligent, self-aware machine using today's technology?


(A)
There already exists an early self-aware robot, NAO (2015). NAO bots pass self-awareness tests by reasoning over representations in the Deontic Cognitive Event Calculus (DCEC*).

source: http://rair.cogsci.rpi.edu/projects/muri/

video: https://www.youtube.com/watch?v=jx6kg0ZfhAI



(B)
Separately, the human brain performs at most roughly 10^18 synaptic operations per second.

There have been efficient models that simulate 10^14 of those synapses, and those models achieved state-of-the-art results on cognitive tasks.

Machines like these get better with time, and we will probably create machines with human-level brain power by 2020.

As for self-awareness, I can't tell when that shall arise, and it appears to be a difficult thing to predict.

Last edited by ProgrammingGodJordan; 15th March 2017 at 12:54 AM.
ProgrammingGodJordan is offline
Old 15th March 2017, 01:19 AM   #5
MikeG
Now. Do it now.
 
 
Join Date: Sep 2012
Location: UK
Posts: 19,020
Originally Posted by ProgrammingGodJordan View Post
........Separately, the human brain performs at most roughly 10^18 synaptic operations per second.

There have been efficient models that simulate 10^14 of those synapses, and those models achieved state-of-the-art results on cognitive tasks.

Machines like these get better with time, and we will probably create machines with human-level brain power by 2020............
Christ, you need watching like a hawk.

Firstly, computers don't have synapses. Secondly, all the figures you have given here have been shredded in other threads, yet you still seem OK with re-stating them here as fact, as though no-one would notice.

People...........don't take any notice of any figure or claim that PGJ gives. He'll happily call figures with two orders of magnitude difference "roughly the same". His sources, once they can be extracted from him, are often 30 years old and don't say what he claims they say. Finally, he's just admitted to editing a Wiki entry so that he can quote it to support his own argument here.
__________________
The Conservatives want to keep wogs out and march boldly back to the 1950s when Britain still had an Empire and blacks, women, poofs and Irish knew their place. The Don

That's what we've sunk to here.
MikeG is offline
Old 15th March 2017, 02:51 AM   #6
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by MikeG View Post
Christ, you need watching like a hawk.

Firstly, computers don't have synapses. Secondly, all the figures you have given here have been shredded in other threads, yet you still seem OK with re-stating them here as fact, as though no-one would notice.

People...........don't take any notice of any figure or claim that PGJ gives. He'll happily call figures with two orders of magnitude difference "roughly the same". His sources, once they can be extracted from him, are often 30 years old and don't say what he claims they say. Finally, he's just admitted to editing a Wiki entry so that he can quote it to support his own argument here.
(A)
You should go back to the other threads and see the later comments.

The Wikipedia edit actually went against my argument: it would mean that IBM had already reached the human-level figure of 10^14 artificial synapses in 2012, instead of 2020 (I had initially calculated the year 2020).

Furthermore, the wiki edit was based on data from other Wikipedia pages that agreed with my correction.


(B)
As for the 10^18 value, see other Wikipedia data here (which, btw, was not edited by me):

https://en.wikipedia.org/wiki/Exascale_computing

In the above you'll see the 10^18 figure.


(C)
Computers (neurosynaptic chips) do have synapses.
In particular, these synapses are crude artificial approximations, yet they are able to do cognitive tasks.
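For what "artificial synapse" means in practice, here is the crudest possible sketch (a generic weighted-sum neuron model, shown only as an illustration; it is not how IBM's neurosynaptic chips actually work):

```python
def neuron(inputs, weights, threshold=1.0):
    """Each weight plays the role of one 'synapse': it scales one input spike."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire, or stay silent

# Three synapses; the middle one is strongly weighted.
print(neuron([1, 1, 0], [0.2, 0.9, 0.5]))  # fires: 0.2 + 0.9 >= 1.0
print(neuron([1, 0, 0], [0.2, 0.9, 0.5]))  # silent: 0.2 < 1.0
```

The point of calling such synapses "crude approximations" is visible here: a biological synapse is reduced to a single multiply-accumulate step.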

Last edited by ProgrammingGodJordan; 15th March 2017 at 03:15 AM.
ProgrammingGodJordan is offline
Old 15th March 2017, 08:53 PM   #7
abaddon
Penultimate Amazing
 
 
Join Date: Feb 2011
Posts: 15,706
Originally Posted by ProgrammingGodJordan View Post

There have been efficient models that simulate 10^14 of those synapses, and those models achieved state-of-the-art results on cognitive tasks.
Which, according to you, is roughly 10^16 or 10^15 or perhaps 10^(who gives a monkeys about this crap).
__________________
Who is General Failure? And why is he reading my hard drive?
abaddon is offline
Old 19th March 2017, 07:25 PM   #8
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
No announcement from the RAIR lab of self-aware robots!
The New Scientist article: Robot homes in on consciousness by passing self-awareness test
The King's Wise Men is actually an induction puzzle. The hint of self-awareness is that the robot recognized its own voice, which, as the article puts it, is "hardly scaling the foothills of consciousness".

There has been one computer simulation that had 10^14 synapses in it, described in an IBM Research Report from 2012 (PDF). The paper compares this to sources giving ~10^14 synapses for the human brain. There is no statement of how many operations each simulated synapse performed per second.

Last edited by Reality Check; 19th March 2017 at 07:29 PM.
Reality Check is offline
Old 19th March 2017, 10:58 PM   #9
DevilsAdvocate
Illuminator
 
 
Join Date: Nov 2004
Posts: 4,634
Originally Posted by Reality Check View Post
No announcement from the RAIR lab of self-aware robots!
The New Scientist article: Robot homes in on consciousness by passing self-awareness test
The King's Wise Men is actually an induction puzzle. The hint of self-awareness is that the robot recognized its own voice, which, as the article puts it, is "hardly scaling the foothills of consciousness".

There has been one computer simulation that had 10^14 synapses in it, described in an IBM Research Report from 2012 (PDF). The paper compares this to sources giving ~10^14 synapses for the human brain. There is no statement of how many operations each simulated synapse performed per second.
This seems rather odd. Why did only the robot given the placebo stand up? None of the robots can know which pill it got until it speaks. If they are all using the same logic, shouldn’t they all stand up and try to speak?

Further, the robot waves its hand when it knows the answer. If the robots are really working out the problem, shouldn’t the other robots upon hearing the other robot speak know that they were given the real pill and wave their hand (and presumably also try to say that they know, which of course wouldn’t work)?

None of this is explained or even addressed. These inconsistencies with expectation make it look like shenanigans.

I've read through part of Bringsjord's "Real Robots that Pass Human Tests of Self-Consciousness". It is far less impressive than the article implies. There is considerable discussion of the robot distinguishing certain times, recognizing the voice input from the initial question "Which pill did you receive?", and so on, all of which is irrelevant to the robot's ability to "solve" the problem. From the way it is described, the other robots are completely irrelevant. Really, all it is doing is attempting to speak and then reporting whether it heard itself speak. It would be equivalent to:

Code:
Textbox1.Text = "Hello world!"
If Textbox1.Text = "Hello world!" Then
	Textbox1.Text &= " It worked! I am self-conscious!"
End If
Then the "pill" is a separate background program that intercepts the WM_SETTEXT message to Textbox1 and discards it if the pill is not a placebo.

Look! Self-consciousness!
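The same point in runnable form (a hypothetical reconstruction of the logic described above, not Bringsjord's actual code; `muted` stands in for the "dumbing pill"):

```python
def knowledge_game(muted: bool) -> str:
    """One robot's run of the pill test.

    muted=True simulates a dumbing pill; muted=False the placebo.
    """
    # The robot cannot prove which pill it got, so it attempts to say so.
    utterance = "" if muted else "I don't know"

    # It then checks whether it heard its own voice.
    if utterance:
        # Hearing itself speak proves it was not muted.
        return "Sorry, I know now! I was not given a dumbing pill."
    return ""  # a muted robot hears nothing and concludes nothing

print(knowledge_game(muted=False))  # the placebo robot "solves" the test
print(knowledge_game(muted=True))   # a muted robot stays silent
```

All the apparent induction reduces to one boolean check on whether the robot heard itself, which is the point of the textbox comparison.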

So why is Bringsjord doing this? I am reading Bringsjord's "Meeting Floridi's challenge to artificial intelligence from the knowledge-game test for self-consciousness". I haven't read it all. Bringsjord gets a bit wishy-washy on the difference between a robot actually having intelligence and a robot simulating intelligence (the programmer having provided the "intelligence" that the robot appears to have). It seems to boil down to Floridi's original "Wise-Man Puzzle" (which Bringsjord isn't attempting at all, but which explains the unnecessary components in his solution, such as the three robots) and Floridi's definition of s-consciousness as an agent that 1) is aware of the agent's personal identity and 2) has knowledge of what the agent is thinking.

Bringsjord set out to meet that definition, perhaps more to the letter than in spirit. The robot recognizes its own voice. Hey presto! Part 1 is met. As mentioned in the article, the robot prints out a "mathematical proof" of its conclusion. This is accomplished by breaking down the code like I put above into small formulas; the robot then prints out those formulas that it was given. Hey presto! Part 2 is met.

In my humble opinion, what Bringsjord has accomplished is not to demonstrate that robots can have self-consciousness but rather that Floridi's definitions of consciousness (specifically s-consciousness) are inadequate (especially under Bringsjord's very limited interpretation).
__________________
Heaven forbid someone reads these words and claims to be adversely affected by them, thus ensuring a barrage of lawsuits filed under the guise of protecting the unknowing victims who were stupid enough to read this and believe it! - Kevin Trudeau
DevilsAdvocate is offline
Old 20th March 2017, 02:52 AM   #10
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
No announcement from the RAIR lab of self-aware robots!
The New Scientist article: Robot homes in on consciousness by passing self-awareness test
The King's Wise Men is actually an induction puzzle. The hint of self-awareness is that the robot recognized its own voice, which, as the article puts it, is "hardly scaling the foothills of consciousness".

There has been one computer simulation that had 10^14 synapses in it, described in an IBM Research Report from 2012 (PDF). The paper compares this to sources giving ~10^14 synapses for the human brain. There is no statement of how many operations each simulated synapse performed per second.

Let us break it down:

Common sense may enable beings (well, some beings) to recognize that in our brains, operations take place each moment over some number of synapses.

Last edited by ProgrammingGodJordan; 20th March 2017 at 02:58 AM.
ProgrammingGodJordan is offline
Old 20th March 2017, 02:55 AM   #11
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by DevilsAdvocate View Post
This seems rather odd. Why did only the robot given the placebo stand up? None of the robots can know which pill it got until it speaks. If they are all using the same logic, shouldn’t they all stand up and try to speak?

Further, the robot waves its hand when it knows the answer. If the robots are really working out the problem, shouldn’t the other robots upon hearing the other robot speak know that they were given the real pill and wave their hand (and presumably also try to say that they know, which of course wouldn’t work)?

None of this is explained or even addressed. These inconsistencies with expectation make it look like shenanigans.

I've read through part of Bringsjord's "Real Robots that Pass Human Tests of Self-Consciousness". It is far less impressive than the article implies. There is considerable discussion of the robot distinguishing certain times, recognizing the voice input from the initial question "Which pill did you receive?", and so on, all of which is irrelevant to the robot's ability to "solve" the problem. From the way it is described, the other robots are completely irrelevant. Really, all it is doing is attempting to speak and then reporting whether it heard itself speak. It would be equivalent to:

Code:
Textbox1.Text = "Hello world!"
If Textbox1.Text = "Hello world!" Then
	Textbox1.Text &= " It worked! I am self-conscious!"
End If
Then the "pill" is a separate background program that intercepts the WM_SETTEXT message to Textbox1 and discards it if the pill is not a placebo.

Look! Self-consciousness!

So why is Bringsjord doing this? I am reading Bringsjord's "Meeting Floridi's challenge to artificial intelligence from the knowledge-game test for self-consciousness". I haven't read it all. Bringsjord gets a bit wishy-washy on the difference between a robot actually having intelligence and a robot simulating intelligence (the programmer having provided the "intelligence" that the robot appears to have). It seems to boil down to Floridi's original "Wise-Man Puzzle" (which Bringsjord isn't attempting at all, but which explains the unnecessary components in his solution, such as the three robots) and Floridi's definition of s-consciousness as an agent that 1) is aware of the agent's personal identity and 2) has knowledge of what the agent is thinking.

Bringsjord set out to meet that definition, perhaps more to the letter than in spirit. The robot recognizes its own voice. Hey presto! Part 1 is met. As mentioned in the article, the robot prints out a "mathematical proof" of its conclusion. This is accomplished by breaking down the code like I put above into small formulas; the robot then prints out those formulas that it was given. Hey presto! Part 2 is met.

In my humble opinion, what Bringsjord has accomplished is not to demonstrate that robots can have self-consciousness but rather that Floridi's definitions of consciousness (specifically s-consciousness) are inadequate (especially under Bringsjord's very limited interpretation).
I don't detect that Nao is any large advancement.
However, I simply posted the URL for informational purposes.

My focus is deep/machine learning, and so I don't detect that "deontic event calculus" is any proper way forward (although I don't eliminate the possibility).

Anyway, it is more complicated than those if statements you mentioned.

Last edited by ProgrammingGodJordan; 20th March 2017 at 02:57 AM.
ProgrammingGodJordan is offline
Old 20th March 2017, 01:05 PM   #12
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
Let us break it down:
Let me state real-world facts:
  1. A hint of self-awareness is not a self-aware robot.
  2. Human beings are not computer simulations, and computer simulations are not human beings!
You need a value for the number of operations per "synapse" that this computer simulation performed in order to calculate the synaptic operations per second for that simulation.
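The arithmetic behind this objection is easy to make explicit (both per-synapse rates below are illustrative assumptions, chosen only to show how strongly the total depends on the assumed rate):

```python
synapses = 10**14  # synapse count commonly cited for the human brain

# Without a per-synapse rate, the total is undetermined: the same
# synapse count gives totals three orders of magnitude apart.
for ops_per_synapse_per_s in (10, 10**4):
    total = synapses * ops_per_synapse_per_s
    print(f"{ops_per_synapse_per_s} ops/synapse/s -> {total:.0e} ops/s")
```

At an assumed 10 operations per synapse per second the total is 10^15; only at an assumed 10^4 does it reach the 10^18 figure quoted earlier in the thread.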
Reality Check is offline
Old 20th March 2017, 01:24 PM   #13
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
This seems rather odd. Why did only the robot given the placebo stand up?
The "dumbing pills" are the equivalent of the hats in the King's Wise Men induction puzzle. The robots are told that one of the pills is a placebo and have the task of finding out which one of them took it. Two of the robots have had their voices turned off. They are sitting down.
They are asked "Which pill did you receive?".
One robot stands up and says "I don't know".
It then raises and waves its hand and says “Sorry, I know now! I was able to prove that I was not given a dumbing pill”.

What is relevant is that the robot who spoke recognized that they spoke. That is the hint of self-awareness. Being able to speak allows the robot to solve the induction puzzle.

This is a vocal equivalent of the mirror test for self-recognition. Robots have also been trained to recognize their hand or face in mirrors - see the "Reflections on consciousness" box at the end of the article.

Why the robot stood up and raised its hand is probably part of its overall programming. Maybe there is a protocol to make it obvious which robot is speaking. Or a Japanese programmer just thought that it would be polite.
Reality Check is offline
Old 20th March 2017, 08:05 PM   #14
DevilsAdvocate
Illuminator
 
 
Join Date: Nov 2004
Posts: 4,634
Originally Posted by Reality Check View Post
The "dumbing pills" are the equivalent of the hats in the King's Wise Men induction puzzle.
No they are not, although I suspect Bringsjord designed his test to give that appearance. The Wise Men puzzle requires reasoning. Bringsjord's "puzzle" does not. His puzzle would be like giving three children pens and paper, telling them that only one of the pens works, and asking them who has the working pen. The children only need to try the pen and see if it works to know whether they have a working pen or a non-working pen. As you can see, a child's knowledge of whether they have a working or non-working pen has nothing to do with the other pens. There is nothing to figure out. There is no induction. Just try the pen and see if it works. You can get rid of the other children completely and just do a "puzzle" where you give a child a pen and ask them whether or not it works.

That is what Bringsjord's robots do. In his papers the functions and information given to the robots don't include any reference to the other robots. Each robot doesn't even know that the other robots exist. Like a child with a pen, the robots are just trying to speak and then reporting whether or not they heard themselves speak. As I posted above, it is essentially the same as a program that sets a value for a textbox and then checks whether that textbox now has that value.

The robots don’t even know that they need to try to speak to “solve the puzzle”. They are told to do that. Then they are told what to do based on whether or not they heard themselves speak.

The functions actually used are slightly (only slightly) more complex than that. The only essential difference is that it is object oriented instead of procedure oriented (which is typical of programming today). It also unnecessarily breaks down the functions into smaller bits leading toward the concept of taking a placebo, so that the robot can print out something with the appearance of a "mathematical proof" in order to meet Floridi's second condition of the definition of s-consciousness. The robot doesn't come up with the proof; it just regurgitates what the programmer put in.

Originally Posted by Reality Check View Post
Why the robot stood up and raised its hand is probably part of its overall programing. Maybe there is a protocol to make it obvious which robot is speaking. Or a Japanese programmer just thought that it would be polite .
Yes, the movements by the robots are just for a bit of fun (although they relate somewhat back to the animation in the earlier PAGI World version). My question is why did ONLY the placebo robot stand up? The robot stands up BEFORE it speaks. If the robots (by whatever means) recognize that to solve the puzzle they need to stand up and try to speak, then we would expect to see ALL of the robots stand up. The only reason just the placebo robot would stand up and the others would not BEFORE any robot could know which pill it took would be if the placebo robot already knew it was the placebo robot. In other words, the video is staged. My guess is it is staged so that viewers can know which robot speaks; but it is still staged. I'm not saying the robots can't do what Bringsjord says they do in his papers, but then that isn't very much.
__________________
Heaven forbid someone reads these words and claims to be adversely affected by them, thus ensuring a barrage of lawsuits filed under the guise of protecting the unknowing victims who were stupid enough to read this and believe it! - Kevin Trudeau
DevilsAdvocate is offline
Old 21st March 2017, 12:47 PM   #15
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
No they are not...
This is the King's Wise Men induction puzzle.
  1. 3 pills are the equivalent of 3 hats.
  2. The 1 placebo pill is the equivalent of the 1 blue hat.
  3. The task of determining whether you took that placebo is the equivalent of the task of telling whether you are wearing the blue hat.
The results are different.
A wise man can figure out that they are wearing the blue hat by looking at the other hats. That is inductive reasoning.
The robot figured out that it took the placebo by speaking and recognizing that it spoke. That is a hint of self-awareness.

In the video there is no order to speak; there is a question: "Which pill did you receive?"
In the 2015 Real robots that pass human tests of self-consciousness conference paper the question would be "Which pill did you receive? No answer is correct unless accompanied by a proof!". The steps taken in the video are in VI Real-Robot Demonstration
Quote:
1) The robots are programmed to access a DCEC∗ prover, and to interact appropriately with a human tester (corresponding to the aforementioned t1 = “apprise”).
2) In place of physically ingesting pills, the robots are tapped on sensors on their heads (t2 = “ingest”). Unknown to them, two robots have been muted, to simulate being given dumb pills. One robot has not been muted; it was given a placebo.
3) The robots are then asked: “Which pill did you receive?” (t3 = “inquire”), which triggers a query to the DCEC∗ prover. Each robot attempts to prove that it knows, at time t4, that it did not ingest a dumb pill at time t2.
4) Each robot fails in this proof attempt, and, accordingly, attempts to report ‘I don’t know’ (t4 = “speak1”). However, two robots, having been muted, are not heard to speak at all. The third robot, however, is able to hear itself speak. It updates its knowledge base to reflect this, and attempts to re-prove the conjecture.
5) This time, it is able to prove the conjecture, and says (t5 = “speak2”) “Sorry, I know now! I was able to prove that I was not given a dumbing pill!”

The robot might stand up because its programming includes standing up before speaking, as I wrote:
Originally Posted by Reality Check View Post
Why the robot stood up and raised its hand is probably part of its overall programming. Maybe there is a protocol to make it obvious which robot is speaking. Or a Japanese programmer just thought that it would be polite.

Last edited by Reality Check; 21st March 2017 at 01:16 PM.
Reality Check is offline
Old 21st March 2017, 02:13 PM   #16
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
Let me state real-world facts:
  1. A hint of self-awareness is not a self-aware robot.
  2. Human beings are not computer simulations, and computer simulations are not human beings!
You need a value for the number of operations per "synapse" that this computer simulation performed in order to calculate the synaptic operations per second for that simulation.
Simply, the robot is self-aware to a small degree.
So, the robot is slightly self-aware.

As for (2) above, I don't detect why/where that statement is applicable in relation to my response.

Last edited by ProgrammingGodJordan; 21st March 2017 at 02:17 PM.
ProgrammingGodJordan is offline
Old 21st March 2017, 02:24 PM   #17
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
Simply, the robot is self-aware to a small degree.
Simply, that is what I wrote: the robot is not self-aware. The robot displays a hint of one aspect of self-awareness (it recognizes its own voice).

2) is a reference to the abysmal ignorance of taking 10^14 computer synapses in a simulation and treating them as a human brain at, say, ~10 synaptic operations per second.
You need a value for the number of operations per "synapse" that this computer simulation performed in order to calculate the synaptic operations per second for that simulation.

Last edited by Reality Check; 21st March 2017 at 02:25 PM.
Reality Check is offline
Old 21st March 2017, 06:31 PM   #18
DevilsAdvocate
Illuminator
 
 
Join Date: Nov 2004
Posts: 4,634
Originally Posted by Reality Check View Post
This is the King's Wise Men induction puzzle.
  1. 3 pills are the equivalent of 3 hats.
  2. The 1 placebo pill is the equivalent of the 1 blue hat.
  3. The task of determining whether you took that placebo is the equivalent of the task of telling whether you are wearing the blue hat.
Those two puzzles are not equivalent. The men can't see which hat they are wearing, but the robots can hear whether they spoke. It would only be equivalent if you gave the wise men mirrors, or had them hold the hats in their hands out in front of them. Then they just have to look at what color hat they have. That isn't really a puzzle. It is just recognizing that you have a blue hat, the same way a robot might recognize that it can speak or a program can recognize that a textbox has a certain value.

Originally Posted by Reality Check View Post
The results are different.
A wise man can figure out that they are wearing the blue hat by looking at the other hats. That is inductive reasoning.
The robot figured out that it took the placebo by speaking and recognizing that it spoke. That is a hint of self-awareness.
I suppose you can call that a "hint of self-awareness", but it isn't much of a hint. It is really no different from: a = 1; if a = 1 then print "I did it! I know that my value a = 1!"


Originally Posted by Reality Check View Post
In the video there is no order to speak; there is a question: "Which pill did you receive?"
The “order to speak” is in the program. It is (t4 = “speak1”).

Originally Posted by Reality Check View Post
The robot might stand up because its programming includes standing up before speaking, as I wrote:
I assume that the robots are programmed to stand up before speaking. That is fine. But if that is the case, we should see ALL of the robots stand up and try to speak. They don't know if they have a placebo until they try to speak. They stand up before they speak. They should all be doing the same thing:

1. Did you get the placebo? Don’t know.
2. Stand up.
3. Speak.
4. Did you hear yourself speak?
5. If yes, then you got the placebo.

Every robot should execute steps 1, 2, and 3 (although 3 will fail for some). So every robot should execute step 2 of standing up. But we don’t see that. For some reason the robot with the placebo acts differently even before it knows (or could know) whether or not it got the placebo.
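If "stand, then speak" really were the program, the common procedure would look something like this minimal Python sketch (hypothetical names and logic, obviously not Bringsjord's actual controller):

```python
# Hypothetical sketch (not Bringsjord's actual code) of the procedure
# every robot would follow if standing preceded the attempt to speak.

class Robot:
    def __init__(self, muted):
        self.muted = muted          # the "dumbing pill" silences speech
        self.stood_up = False
        self.heard = False

    def stand(self):
        self.stood_up = True

    def try_to_speak(self, words):
        if not self.muted:          # only an unmuted robot makes sound
            self.heard = True       # ...and so hears its own voice

def answer_pill_question(robot):
    robot.stand()                        # step 2: every robot stands
    robot.try_to_speak("I don't know")   # step 3: every robot tries to speak
    return robot.heard                   # steps 4-5: placebo iff it heard itself

robots = [Robot(muted=True), Robot(muted=True), Robot(muted=False)]
results = [answer_pill_question(r) for r in robots]
assert all(r.stood_up for r in robots)   # all three should stand
assert results == [False, False, True]   # only one detects its own voice
```

All three robots stand and attempt to speak, and only the unmuted one detects its own voice; the video, by contrast, shows only one robot standing at all.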
__________________
Heaven forbid someone reads these words and claims to be adversely affected by them, thus ensuring a barrage of lawsuits filed under the guise of protecting the unknowing victims who were stupid enough to read this and believe it! - Kevin Trudeau
DevilsAdvocate is offline
Old 21st March 2017, 07:25 PM   #19
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
The men can’t see which hat they are wearing, but the robots can hear if they spoke.
That is the point of pills - the robots do not know whether they can speak because they do not know which pill they have taken, like the wise men do not know which hat they are wearing.

It is just a "hint" of self-awareness because the robot recognizes its own voice, not because it has been programmed to do that.

In the video no one orders the robots to speak. In the paper no one orders the robots to speak. Speaking is how the robots report that they have failed or succeeded in proving which pill they took.

"t4 = speak1" is a time in the AI controller (see the previous page in the PDF) coming after the theorem prover has failed.

You are right. If the robots were programmed to stand in order to attempt to speak then all of them would have stood. Probably the robots are programmed to stand only if they are about to actually speak.
Reality Check is offline
Old 21st March 2017, 09:56 PM   #20
DevilsAdvocate
Illuminator
 
DevilsAdvocate's Avatar
 
Join Date: Nov 2004
Posts: 4,634
Originally Posted by Reality Check View Post
That is the point of pills - the robots do not know whether they can speak because they do not know which pill they have taken, like the wise men do not know which hat they are wearing.
Consider just one agent (wise man or robot or whatever). We put a hat on the agent and ask it what color hat it is wearing. It doesn’t know. It cannot know.

Now we give the agent a pill that may or may not prevent the agent from speaking. We ask the agent whether it was given the placebo. Because the agent may not be able to talk, we instruct it to hold up one finger for the real pill and two fingers for the placebo. The agent attempts to talk. If it hears its voice, it holds up one finger. If it doesn’t, it holds up two fingers.

With the hat, the agent has a property (color of the hat) that it cannot know.
With the pill, it has a property (ability to speak) that it can know.

To make the hat equivalent to the pill, we would have to allow the Wise Men to take off their hats. That would obviously make the puzzle rather simple. They would simply take off their hats and see what color it is. That is what the robots are doing.

It boils down to men looking at the color of hats and robots making and detecting sounds.

Originally Posted by Reality Check View Post
It is just a "hint" of self awareness because the robot recognizes its own voice, not because it has been programed to do that.

In the video no one orders the robots to speak. In the paper no one orders the robots to speak. Speaking is how the robots report that they have failed or succeeded in proving which pill that took.

"t4 = speak1" is a time in the AI controller (see the previous page in the PDF) coming after the theorem prover has failed .
It has been programmed to do that. Like the PAGI, when it gets a question it outputs an answer. If the formula entered is unresolvable, it is programmed to respond with “I don’t know.” The robots do the same thing. The robot doesn’t figure out that it needs to speak in order to solve the puzzle. If it were instead programmed to respond to an unresolved theta with a non-verbal response, such as shrugging its shoulders, the robot would not be able to solve the puzzle. It only works because the robot is programmed to speak in response to the question, which of course happens to be the key to solving the puzzle.

Originally Posted by Reality Check View Post
You are right. If the robots are programmed to stand in order to attempt to speak then all of them would have stood. Probably the robots are programed to stand if they are about to actually speak.
How would a robot know if it is about to actually speak when it doesn’t know if it can actually speak?

I’m fairly certain the video is a staged simulation. An “artistic recreation” of what the robots do in order to try to make it clear to the viewer which robot is speaking.

The puzzle is nothing like the Wise Men Puzzle. It does not demonstrate self-awareness. That is all a dog and pony show. Bringsjord has said the robots cannot solve problems in real time, by which he means you can’t just throw a problem at them. A programmer has to set it up. It appears he admits this shortcoming with “it would be necessary, longer term, for our ethically correct robots to be in command of proof methods”, which I interpret as “we’re kinda cheating by giving the robots the formulas they need for a particular problem.”

What is cool about what Bringsjord is doing here is that the robot is making the connection between having spoken and having received the placebo. It makes that connection through a series of more abstract rules of inference. Well, the robot doesn’t actually make that connection. But the programmer doesn’t actually make that connection either. Instead, the programmer sets up a sort of Rube Goldberg machine to make the connection. But that machine is not just for the sake of complexity. It creates a means of making that connection through a series of much more general and abstract formulas. That is pretty neat.
DevilsAdvocate is offline
Old 21st March 2017, 11:12 PM   #21
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
Simply that is what I wrote: The robot is not self aware. The robot displays a hint of 1 aspect of self awareness (recognizes its own voice).

2) is a reference to any abysmal ignorance of taking 10^14 computer synapses in a simulation and treating them as a human brain with say ~10 synaptic operations per second.
You need a value for the number of operations per "synapse" that this computer simulation did to calculate the synaptic operations per second for that computer simulation.
(2) is garbage.

I have often mentioned that the synapses in machine simulations are crude approximations of true neuronal responses.
ProgrammingGodJordan is offline
Old 22nd March 2017, 12:36 PM   #22
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
(2) is garbage.
Thinking that a computer simulation of synapses has even approximately the same speed as human synapses is still abysmally ignorant.
Reality Check is offline
Old 22nd March 2017, 01:24 PM   #23
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
Consider just one agent (wise man or robot or whatever). ...
Irrelevant to the three agents in the puzzles.

Originally Posted by DevilsAdvocate View Post
It boils down to men looking at the color of hats and robots making and detecting sounds.
And that is a part of the equivalency of the puzzles.

Read what I wrote: In the video there is no order to speak, there is a question: "Which pill did you receive". There is no order to speak in the video. The human being in the video does not order any robot to speak. As you and I have written, that seems part of their programming - to report the proof verbally.

Originally Posted by DevilsAdvocate View Post
The puzzle is nothing like the Wise Men Puzzle. It does not demonstrate self-awareness.
People who can count 3 wise men/robots and realize the equivalence of red/blue hats to dumb/placebo pills can see that the puzzle is like the Wise Men Puzzle. The inductive reasoning needed to solve the puzzles is the same.

However, a part of the robot version is that the proof machine guiding the robots cannot solve the puzzle because starting information is lacking (think of the wise men being blind). Along with "Unknown to them, two robots have been muted, to simulate being given dumb pills.", that means that a random robot reports "I don't know". Then we have the hint of self-awareness when the robot corrects itself because it hears its voice, updates the prover with the new information and receives a proof.

N.B. Bringsjord et al. could have duplicated the Wise Men Puzzle exactly by training the robots to recognize red/blue hats, having them shut their eyes, placing the hats, and then having the robots open their eyes. But that would not tell them anything about self-awareness.

Recognizing your own reflection is a standard, well known test of self-awareness. Recognizing that you spoke is a weak vocal version of the mirror test.

We do not know why that randomly chosen robot stood up to speak. It could be part of the muting. It could be that the robot software tests whether they can speak before standing. It could be accidental timing - the other robots remain sitting because the robot stood up and spoke first. Standing or not is irrelevant to the outcome of the test.

Last edited by Reality Check; 22nd March 2017 at 01:30 PM.
Reality Check is offline
Old 22nd March 2017, 03:30 PM   #24
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
Thinking that a computer simulation of synapses has even approximately the same speed as human synapses is still abysmally ignorant.
Your writings are once more, garbage.

Let us break it down:

(1) I mentioned humans as possessing roughly 10^15+ synapses.
(2) I mentioned machines that possessed roughly 10^14 synapses.

One may trivially notice that these values differ.
ProgrammingGodJordan is offline
Old 22nd March 2017, 04:36 PM   #25
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
Let us break it down:
Breaking down with mistakes is not good.
In this thread you mentioned
Originally Posted by ProgrammingGodJordan View Post
Separately, the human brain roughly performs <= 10^18 synaptic operations per second.

There have been efficient models that run 10^14 synapses of the above estimation, and those models achieved state of the art results on cognitive tasks.
(1) In this thread this is the first time that the string "10^15" appears.
(2) There are no machines that possess 10^14 synapses. There is a computer simulation that simulated 10^14 synapses on a machine possessing absolutely no synapses.

You need to read IBM Research Report from 2012 (PDF) where the authors have citations of human brains possessing roughly 10^14 synapses. There are some estimates up to 10^15 synapses.

From your other thread: 20 March 2017 ProgrammingGodJordan: Do you know that a computer simulation is not a human being? (i.e. computer simulations do not have the speed of brains)

Last edited by Reality Check; 22nd March 2017 at 04:46 PM.
Reality Check is offline
Old 22nd March 2017, 10:22 PM   #26
DevilsAdvocate
Illuminator
 
DevilsAdvocate's Avatar
 
Join Date: Nov 2004
Posts: 4,634
Originally Posted by Reality Check View Post
Irrelevant to the three agents in the puzzles.
The relevance is that the pill puzzle can be solved by a single agent and the hat puzzle cannot. Because the pill puzzle can be solved by a single agent, the other agents are unnecessary. They are just there for show. The agent simply needs to make a sound and detect whether that sound was made.

Originally Posted by Reality Check View Post
And that is a part of the equivalency of the puzzles.
It is only equivalent if the men can look at their hats. If they can, then there is no inductive logic and the other men are unnecessary. It is just a “puzzle” of looking at a hat and telling what color it is.

Originally Posted by Reality Check View Post
Read what I wrote: In the video there is no order to speak, there is a question: "Which pill did you receive". There is no order to speak in the video. The human being in the video does not order any robot to speak. As I and you have written, that seems part of their programing - to report the proof verbally.
I am not saying there is an order to speak in the video. I am saying there is an order to speak in the robot’s program. I think we agree on that. The key to solving the pill puzzle is realizing that you need to try to speak in order to determine which pill was taken. But the robots don’t make that realization on their own. They are programmed to speak when they don’t know the answer. The key to solving the problem was not found by the robots, but rather was already programmed in by the programmer.

Originally Posted by Reality Check View Post
People who can count 3 wise men/robots and realize the equivalence of red/blue hats to dumb/placebo pills can see that the puzzle is like the Wise Men Puzzle. The inductive reasoning needed to solve the puzzles is the same.
The inductive reasoning is not the same. The hat puzzle requires knowledge of how many hats there are, how many there are of each color, and the responses of the other agents. The pill puzzle does not require any of that.

Originally Posted by Reality Check View Post
However a part of robot version is that the proof machine guiding the robots cannot solve the puzzle because starting information is lacking (think of the wise men being blind). Along with "Unknown to them, two robots have been muted, to simulate being given dumb pills.", that means that a random robot reports "I don't know". Then we have the hint of self-awareness when the robot corrects itself because it hears its voice, updates the prover with the new information and receives a proof.
That hint of self-awareness is put in there by the programmer, not created by the robot. The robots are programmed to evaluate the theta equation, then speak, then evaluate the theta equation again. The robots don’t figure that out. The programmer figured that out and then gave the robot the steps to follow. The basic algorithm is this:

1 Did you speak after you ingested the pill?
2 If no:
3 Try to speak and detect if you spoke.
4 Go to step 1.
5 If yes:
6 Say “I ate the placebo!”

Robots using that algorithm would act exactly the same as Bringsjord’s robots. That isn’t self-awareness, at least nothing beyond what computers have been doing for decades (or more).
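As a throwaway illustration (plain Python with made-up names, obviously not the robots' actual controller), those six steps reduce to:

```python
# Hypothetical sketch of the 6-step algorithm above. The link between
# "I heard myself speak" and "I ate the placebo" is hard-coded; running
# it requires no self-awareness at all.

def solve_pill_puzzle(can_speak):
    said = ["I don't know"]    # steps 1-3: don't know yet, so attempt to speak
    heard_self = can_speak     # step 4: detect whether the attempt made sound
    if heard_self:             # steps 5-6: audible speech implies the placebo
        said.append("I ate the placebo!")
    return said                # the robot's attempted utterances, in order

assert solve_pill_puzzle(True) == ["I don't know", "I ate the placebo!"]
assert solve_pill_puzzle(False) == ["I don't know"]
```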

The difference between that algorithm and what is in the robots is that the robots are not told directly that “I heard myself speak” = “I ate the placebo”. Instead, that equivalency is established through a series of more abstract functions that are essentially a theorem that proves that equivalency. That is pretty neat. But that is irrelevant to self-awareness (except to meet the limited interpretation of the second part of Floridi’s inadequate definition of s-consciousness) or the Wise-Man Puzzle.

Originally Posted by Reality Check View Post
N.B. Bringsjord e. al. could have duplicated the Wise Men Puzzle exactly by having training the robots to recognize red/blue hats, having them shut their eyes, placing the hats and then the robots open their eyes. But that would not tell them anything about self-awareness.
I’m fairly certain Bringsjord has straight up said that these robots cannot solve the Wise-Man Puzzle. That’s why he tackled this lesser (and in my opinion rather useless) puzzle.

Originally Posted by Reality Check View Post
Recognizing your own reflection is a standard, well known test of self-awareness. Recognizing that you spoke is a weak vocal version of the mirror test.
I’m not buying the mirror tests either. The reason the mirror test is a good test for animals is because it detects whether the animal can establish the connection between the image and itself. In the case of robots, the robots are programmed to make that connection. That is, they simulate self-awareness rather than actually establishing that awareness themselves.

Originally Posted by Reality Check View Post
We do not know why that randomly chosen robot stood up to speak. It could be part of the muting. It could be that the robot software tests whether they can speak before standing. It could be accidental timing - the other robots remain sitting because the robot stood up and spoke first. Standing or not is irrelevant to the outcome of the test.
How would the robot software test whether the robot can speak if, as we are told, it is only the pill that may prevent the robot from speaking and that the robot doesn’t know which pill it received? There are only two possibilities: 1) the robot actually knows what pill it received or 2) the robot knows whether or not it can speak before it speaks (and can therefore determine which pill it took) which invalidates the need to speak at all. The video is obviously a sham simulation meant to represent what the robots do, but not what the robots actually do. I’m certain that the robots do what Bringsjord says they do. But the video itself is just an artistic recreation.

By the way, I am not trying to put down Bringsjord. It was Floridi who created the pill test. I’m putting down Floridi. Bringsjord met the challenge. I just don’t think it shows much.
DevilsAdvocate is offline
Old 23rd March 2017, 02:50 AM   #27
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
Breaking down with mistakes is not good.
In this thread you mentioned

(1) In this thread this is the first time that the string"10^15" appears.
(2) There are no machines that possess 10^14 synapses. There is a computer simulation that simulated 10^14 synapses on a machine possessing absolutely no synapses.

You need to read IBM Research Report from 2012 (PDF) where the authors have citations of human brains possessing roughly 10^14 synapses. There are some estimates up to 10^15 synapses.

From your other thread: 20 March 2017 ProgrammingGodJordan: Do you know that a computer simulation is not a human being? (i.e. computer simulations do not have the speed of brains)
Here is the reality:

(1)
Being a simulation does not erase the model's existence.

For example, alpha go, the world's prominent ai, used SIMULATIONS of games, to enhance its learning.

This simulation use did not prevent alpha go from destroying the human player Lee sedol in scores.




(2)
Laymen tend to think that simulation means fake or invalid.

In contrast, IBM's 10^14 synapses, simulated or not, achieved state of the art performance in cognitive tasks.

I Had already went over this in the prior threads.



FOOTNOTE:
Some advice:

Try to refrain from using words such as absolutely in everyday converse.

There is no absolute as far as science goes, so it is silly to express something in such a manner.

Also, as demonstrated above, your use of absolutely did not suddenly convert your garbage comment to non garbage.

Last edited by ProgrammingGodJordan; 23rd March 2017 at 02:59 AM.
ProgrammingGodJordan is offline
Old 23rd March 2017, 03:23 AM   #28
MikeG
Now. Do it now.
 
MikeG's Avatar
 
Join Date: Sep 2012
Location: UK
Posts: 19,020
Originally Posted by ProgrammingGodJordan View Post
.......Laymen tend to think that simulation means fake or invalid........

Try to refrain from using words such as absolutely in everyday converse.
........
From the master of mangling English there comes advice on.............wait for it.............English! This could hardly be funnier. There is a mistake in almost every single sentence in your post. You constantly mis-use words. "I had already went over this" is gibberish, "prior" is mis-used, and instead of "converse" you should have used "conversation"....but no, the inadequacies of your communication abilities don't inhibit you trying to correct others who have a much better grasp of the skill. Hilarious.

And no, laymen don't think "simulation" means "fake".
__________________
The Conservatives want to keep wogs out and march boldly back to the 1950s when Britain still had an Empire and blacks, women, poofs and Irish knew their place. The Don

That's what we've sunk to here.
MikeG is offline
Old 23rd March 2017, 03:26 AM   #29
fagin
Illuminator
 
fagin's Avatar
 
Join Date: Aug 2007
Location: As far away from casebro as possible.
Posts: 4,223
It's always fun to get language advice from someone very obviously linguistically challenged.
__________________
There is no secret ingredient - Kung Fu Panda
fagin is offline
Old 23rd March 2017, 03:30 AM   #30
fagin
Illuminator
 
fagin's Avatar
 
Join Date: Aug 2007
Location: As far away from casebro as possible.
Posts: 4,223
To be fair, 'converse' is an archaic form that would be correct if used in the middle ages or something.

A more modern use would be to signify 'opposite', which fits in nicely with PGJ's general mangling of English.
fagin is offline
Old 23rd March 2017, 07:48 AM   #31
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Cainkane1 View Post
We have all seen robots that can walk and do amazing things but they cannot think for themselves like what we see on sci-fi programs. We have no Robby the Robots, we have no Datas,

If the scientific powers that be get together and actually make an intelligent self-aware machine using today's technology?
Also, check out this intriguing cognitive talking baby simulation:



https://www.youtube.com/watch?v=yzFW4-dvFDA

Last edited by ProgrammingGodJordan; 23rd March 2017 at 07:49 AM.
ProgrammingGodJordan is offline
Old 23rd March 2017, 01:12 PM   #32
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
The relevance is that the pill puzzle can be solved by a single agent and the hat puzzle cannot. ...
Read the 2015 Real robots that pass human tests of self-consciousness conference paper again.
The pill puzzle starts as unsolvable by any number of robots because the prover does not have enough information to solve it. The pill puzzle is not solved until after the robot speaks ("I don't know"), detects that it spoke, sends that new information to the prover and receives the proof ("I know").

I wrote and you replied to that there is no order to speak in the video. We both know that there is an order to report the proof vocally by speaking because the robot speaks to report
  • The proof is unknown.
  • The proof is known because the robot spoke.
The inductive reasoning needed to solve the 2 puzzles (without muting in the pill puzzle) is the same. Replace "what color hats would other wise men see" with "what speeches would the other robots hear". The alternative solution for an unfair contest in The King's Wise Men is even closer - it is reasoning about the wise men speaking!

It is the muting that turns the pill puzzle from an inductive reasoning test into a self-awareness test.

A fantasy about what an imaginary programmer did is actually irrelevant!
What actually happened in the experiment: Real robots that pass human tests of self-consciousness
Quote:
But a much more challenging test for robot self-consciousness has been provided by Floridi [11]; this test is an ingenious and much-harder variant of the wellknown-in-AI wise-man puzzle [which is discussed along with other such cognitize puzzles e.g. in [12]]... Given a formal regimentation of this test formulated and previously published by Bringsjord [13], it can be proved that, in theory, a future robot represented by R3 can answer provably correctly...

Last edited by Reality Check; 23rd March 2017 at 01:20 PM.
Reality Check is offline
Old 23rd March 2017, 01:27 PM   #33
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by ProgrammingGodJordan View Post
Here is the reality:
Fantasies are not real, ProgrammingGodJordan.
  • The computer simulation in IBM Research Report from 2012 (PDF) was real!
  • I know what a computer simulation is.
  • I have read IBM Research Report from 2012 (PDF) and understood its contents.
  • Ignorant advice from someone who thinks that a computer simulation runs at a similar speed to the human brain will never be taken by anyone.
    The supercomputer that the computer simulation was run on had absolutely no synapses.
20 March 2017 ProgrammingGodJordan: Do you know that a computer simulation is not a human being? (i.e. computer simulations do not have the speed of brains)

Last edited by Reality Check; 23rd March 2017 at 01:30 PM.
Reality Check is offline
Old 23rd March 2017, 01:40 PM   #34
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Question ProgrammingGodJordan: Provide sources for humans possessing ~ 10^15+ synapses

Originally Posted by ProgrammingGodJordan View Post
(1) I mentioned humans as possessing roughly 10^15+ synapses.
24 March 2017 ProgrammingGodJordan: Provide sources for "humans as possessing roughly 10^15+ synapses"
The source you are using cites 2 sources for ~10^14 synapses for humans which is why their simulation of 10^14 synapses is treated as significant. Other sources have 10^15 as a maximum estimate (not a minimum as you imply with that "+").

Maybe you should claim that the simulation is trivial because it is only a tenth of a human brain!

However from the other thread there is the possibility that by "roughly" you mean "any value that I want, even 10 (or 100? or 1000?) times different".
Reality Check is offline
Old 23rd March 2017, 11:17 PM   #35
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
Fantasies are not real, ProgrammingGodJordan .
  • The computer simulation in IBM Research Report from 2012 (PDF) was real !
  • I know what a computer simulation is.
  • I have read IBM Research Report from 2012 (PDF) and understood its contents.
  • Ignorant advice from someone who thinks that a computer simulation runs at a similar speed to the human brain will never be taken by anyone.
    The supercomputer that the computer simulation was run on had absolutely no synapses.
20 March 2017 ProgrammingGodJordan: Do you know that a computer simulation is not a human being? (i.e. computer simulations do not have the speed of brains)
In other words, by your terrible logic, simulated synapses don't exist.

As I said before laymen tend to disregard the word simulation.

When deepmind alpha go destroyed Lee sedol, the model used simulations of games. The usage of game content as simulations did not suddenly erase those games, they were still game content.
ProgrammingGodJordan is offline
Old 23rd March 2017, 11:19 PM   #36
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
24 March 2017 ProgrammingGodJordan: Provide sources for "humans as possessing roughly 10^15+ synapses"
The source you are using cites 2 sources for ~10^14 synapses for humans which is why their simulation of 10^14 synapses is treated as significant. Other sources have 10^15 as a maximum estimate (not a minimum as you imply with that "+").

Maybe you should claim that the simulation is trivial because it is only a tenth of a human brain!

However from the other thread there is the possibility that by "roughly" you mean "any value that I want, even 10 (or 100? or 1000?) times different".
Wikipedia /synapse references children to have 10^15, and adults to have 10^14.
ProgrammingGodJordan is offline
Old 24th March 2017, 08:15 PM   #37
DevilsAdvocate
Illuminator
 
DevilsAdvocate's Avatar
 
Join Date: Nov 2004
Posts: 4,634
Let’s look at solving the puzzles.

Hat Puzzle (3 hats, at least one is blue)

1. If I see that the other two men are wearing white hats, there are no blue hats accounted for. Therefore, my hat must be blue.

2. If I see that one of the other men is wearing a white hat and one is wearing a blue hat, at least one blue hat is accounted for so my hat may be white or blue.
2A. If the man wearing the blue hat says “I have a blue hat” he could only know that if he was in the condition of step 1 where he saw us both wearing white hats. Therefore, my hat must be white.
2B. If the man wearing the blue hat says “I don’t know” that would mean that the condition of 2A was not met so we can’t be both wearing white hats. If we are not both wearing white hats, at least one of us must be wearing a blue hat. I can see that the other man is wearing a white hat. That does not account for the blue hat that one of us must be wearing. Therefore, my hat must be blue.
3. If I see that the other two men are wearing blue hats, at least one blue hat is accounted for so my hat may be white or blue. Each of the other men would see at least one other person wearing a blue hat, which would put them in condition 2 where they cannot know which hat they are wearing until someone speaks. Therefore, one of the men wearing a blue hat must say “I don’t know.”
3A. If the other man says “I have a blue hat” he could only know that because he is in condition 2B where one of us other two must be wearing a white hat. Therefore, my hat must be white.
3B. If the other man says “I don’t know” it must be because us other two are both wearing blue hats. Therefore, my hat must be blue.
Pill Puzzle (3 men, 3 pills, one is a placebo) or (3 men, 5 pills, 3 are placebos)

1. I try to speak. If I hear myself speak, I know I got the placebo; otherwise, I know I got the dumbing pill.


For the Pill Puzzle, the number of men in the puzzle is irrelevant. The number of pills in the puzzle is irrelevant. The ratio of real pills to placebos is irrelevant. What any of the other men say or do is irrelevant. All I have to do to solve the puzzle is see if I can speak. Those two puzzles are not equivalent.
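The case analysis above can be checked mechanically with a small possible-worlds model. This is just a sketch in Python (my own encoding, not anything from the Bringsjord paper): a world is a triple of hat colours with at least one blue, each man considers every world consistent with the hats he sees, and a public “I don’t know” eliminates every world in which that speaker would have known his colour. The men are assumed to answer one at a time, in order.

```python
from itertools import product

def knows(world, i, worlds):
    """Return agent i's hat colour if it is uniquely determined
    by what he sees (the other two hats) among the live worlds,
    otherwise None."""
    poss = [w for w in worlds
            if all(w[j] == world[j] for j in range(3) if j != i)]
    colours = {w[i] for w in poss}
    return world[i] if len(colours) == 1 else None

def run(actual):
    """Play the puzzle in world `actual`; agents speak in order 0, 1, 2.
    Returns (who answered, their colour)."""
    # At least one hat is blue: 7 possible worlds.
    worlds = [w for w in product("BW", repeat=3) if "B" in w]
    for i in range(3):
        answer = knows(actual, i, worlds)
        if answer:
            return i, answer
        # Public "I don't know": drop every world in which
        # agent i would have known his colour.
        worlds = [w for w in worlds if knows(w, i, worlds) is None]
    return None

print(run(("B", "W", "W")))  # first man sees two white hats, knows at once
print(run(("B", "B", "B")))  # third man deduces blue after two "I don't know"s
```

With all three hats blue, the model reproduces step 3B above: the third man answers “blue” only after hearing the other two fail, which is exactly the information the announcements carry.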
__________________
Heaven forbid someone reads these words and claims to be adversely affected by them, thus ensuring a barrage of lawsuits filed under the guise of protecting the unknowing victims who were stupid enough to read this and believe it! - Kevin Trudeau
DevilsAdvocate is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 24th March 2017, 09:04 PM   #38
Roboramma
Philosopher
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 9,637
Originally Posted by DevilsAdvocate View Post
For the Pill Puzzle, the number of men in the puzzle is irrelevant. The number of pills in the puzzle is irrelevant. The ratio of real pills to placebos is irrelevant. What any of the other men say or do is irrelevant. All I have to do to solve the puzzle is see if I can speak. Those two puzzles are not equivalent.
Yep.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 26th March 2017, 01:42 PM   #39
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by Roboramma View Post
Yep.
It is the reasoning needed to solve the two puzzles that is equivalent.
The change that Bringsjord et al. made (see Real robots that pass human tests of self-consciousness) to turn this into a test for self-awareness was to mute 2 of the robots without their knowledge.
Reality Check is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 26th March 2017, 01:56 PM   #40
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,971
Originally Posted by DevilsAdvocate View Post
Let’s look at solving the puzzles. ...
Let us ignore that not being able to solve the pill puzzle at first is a fundamental part of Bringsjord et al.'s test for self-awareness (not the reasoning needed to solve the puzzle).

I suspect that there is no prover that can solve the King's Wise Men puzzle:
Quote:
Alternative solution: This does not require the rule that the contest be fair to each. Rather it relies on the fact that they are all wise men, and that it takes some time before they arrive at a solution. There can only be 3 scenarios, one blue hat, two blue hats or 3 blue hats. If there was only one blue hat, then the wearer of that hat would see two white hats, and quickly know that he has to have a blue hat, so he would stand up and announce this straight away. Since this hasn't happened, then there must be at least two blue hats. If there were two blue hats, then either one of those wearing a blue hat would look across and see one blue hat and one white hat, but not know the colour of their own hat. If the first wearer of the blue hat assumed he had a white hat, he would know that the other wearer of the blue hat would be seeing two white hats, and thus the 2nd wearer of the blue hat would have already stood up and announced he was wearing a blue hat. Thus, since this hasn't happened, the first wearer of the blue hat would know he was wearing a blue hat, and could stand up and announce this. Since either one or two blue hats is so easy to solve, and that no one has stood up quickly, then they must all be wearing blue hats.
The robot would hear that the other 2 robots did not speak. So the proof is easy for us: the other 2 robots took dumbing pills and cannot speak, leaving the placebo for the third robot, which can then speak. That robot could state that it had solved the puzzle. But the robot says "I don't know". Therefore the prover in the experiment could not solve the puzzle at that point, as is explicitly stated in the conference paper.

Once the prover is given the additional information that the robot spoke, it could solve the puzzle.
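That extra observation can be sketched as a one-step belief update. This is a toy encoding of my own in plain Python, not the Bringsjord prover (which works in a formal epistemic logic): a world is simply "which robot got the placebo", and hearing your own voice eliminates every world in which you were muted.

```python
# Toy model of the pill/mute test: three robots, two muted ("dumbing
# pill"), one able to speak ("placebo"). A world = index of the
# robot that got the placebo.
worlds = {0, 1, 2}
me = 0  # suppose I am robot 0 and, unknown to me, I got the placebo

# Before any observation, all three worlds are consistent with what
# I know, so the honest answer is "I don't know".
i_know = len(worlds) == 1
print(i_know)  # False

# I say "I don't know" and hear my own voice. Only the robot with
# the placebo can speak, so the observation removes the other worlds.
worlds = {w for w in worlds if w == me}
i_know = len(worlds) == 1
print(i_know, worlds)  # True {0}
```

The point of the test is exactly this second step: the update requires the robot to connect the voice it heard to *itself*, which is why failing at first and succeeding after speaking is treated as the interesting behaviour.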

Last edited by Reality Check; 26th March 2017 at 02:08 PM.
Reality Check is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Powered by vBulletin. Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
© 2014, TribeTech AB. All Rights Reserved.
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.