IS Forum
Old 3rd January 2021, 07:12 AM   #121
suren
Scholar
 
 
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 91
Originally Posted by Dr.Sid View Post
That's also why I think sentient AI is a bad idea. It can't add to our survival. Our survival strategy is being the smartest thing around. If we lose that, it's over.
Yeah, regardless of a materialistic or non-materialistic worldview, most will agree that the wisest thing is to avoid creating such things.
__________________
Follow those who seek the truth, run away from those who have found it.
Old 3rd January 2021, 07:13 AM   #122
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by suren View Post
Yeah, regardless of a materialistic or non-materialistic worldview, most will agree that the wisest thing is to avoid creating such things.
Why?
__________________
I wish I knew how to quit you
Old 3rd January 2021, 07:26 AM   #123
suren
Scholar
 
 
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 91
Originally Posted by Darat View Post
Why?
Because this raises moral and philosophical questions. It would also likely complicate our lives, since we would be creating creatures that can have needs different from ours. I think it's better to merge humans and robots to fix our bodies' imperfections than to create separate sentient robots. It's also more economically profitable to use robots simply as tools.
__________________
Follow those who seek the truth, run away from those who have found it.

Last edited by suren; 3rd January 2021 at 07:32 AM.
Old 3rd January 2021, 08:26 AM   #124
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,131
Robots are artificial slaves. They have been from the beginning (by the way, it's 100 years this year). Sentient AI, on the other hand, would be an artificial master.
Old 3rd January 2021, 08:58 AM   #125
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by Dr.Sid View Post
Robots are artificial slaves. They have been from the beginning (by the way, it's 100 years this year). Sentient AI, on the other hand, would be an artificial master.
First of all, why would we develop such an AI? Secondly, if we did, how would they become our masters?
__________________
I wish I knew how to quit you
Old 3rd January 2021, 09:23 AM   #126
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 48,543
Originally Posted by Darat View Post
First of all, why would we develop such an AI? Secondly, if we did, how would they become our masters?
Hubris.

Hubris and laziness.

We would develop such an AI for the simple reason that we can (assuming we ever can). Our reach ever exceeds our grasp.

And they will become our masters because we're lazy.

One of these days we're going to put entire national power grids under the direct control of expert systems because it's easier and cheaper than hiring thousands of highly-trained humans to manually run all that stuff.
Old 3rd January 2021, 09:28 AM   #127
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by theprestige View Post
...snip...

One of these days we're going to put entire national power grids under the direct control of expert systems because it's easier and cheaper than hiring thousands of highly-trained humans to manually run all that stuff.

In that sense we are already under our “masters”.
__________________
I wish I knew how to quit you
Old 3rd January 2021, 09:33 AM   #128
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 48,543
Originally Posted by Darat View Post
In that sense we are already under our “masters”.
Not in any sense I'm talking about, nor in any sense this thread is discussing, I think.
Old 3rd January 2021, 09:35 AM   #129
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by theprestige View Post
Not in any sense I'm talking about, nor in any sense this thread is discussing, I think.
“Expert systems” are already running things like power stations - that was the example you gave, so why use that example if it isn't the type of “master” you wanted to illustrate?
__________________
I wish I knew how to quit you
Old 3rd January 2021, 09:43 AM   #130
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,131
Originally Posted by Darat View Post
“Expert systems” are already running things like power stations - that was the example you gave, so why use that example if it isn't the type of “master” you wanted to illustrate?
These systems do what we programmed them to do. We have control over them. Real human-like AI will be different: it will have a purpose of its own. It might be a nice, high-minded cause; it might be an evil one. It might be madness.
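
To make that distinction concrete, here is a minimal sketch of the kind of hand-written control logic being described. The function name, rules and thresholds are invented purely for illustration and are not taken from any real grid software; the point is only that every behaviour such a system has is a rule a person typed in, so it can be read and predicted line by line.

[code]
# Toy "expert system" style controller: every behaviour is a hand-written rule,
# so the system can only ever do what it was explicitly programmed to do.

def grid_action(frequency_hz: float, demand_mw: float, supply_mw: float) -> str:
    """Pick an action for a (toy) power grid from hand-written rules."""
    if frequency_hz < 49.5:            # frequency sagging: not enough generation
        return "bring reserve generation online"
    if frequency_hz > 50.5:            # frequency climbing: too much generation
        return "curtail generation"
    if demand_mw > 0.95 * supply_mw:   # running close to capacity
        return "request demand reduction"
    return "no action"

print(grid_action(49.2, 900.0, 1000.0))   # bring reserve generation online
print(grid_action(50.0, 980.0, 1000.0))   # request demand reduction
[/code]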
Old 3rd January 2021, 09:57 AM   #131
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by Dr.Sid View Post
These systems do what we programmed them to do. We have control over them. Real human-like AI will be different: it will have a purpose of its own. It might be a nice, high-minded cause; it might be an evil one. It might be madness.
Again, why? And why would we create such AIs and put them in control of us?

Perhaps I’m not being clear with my “whys”.

We are going to be the ones creating these AIs, so why would we design them so they could harm us or output solutions we don’t want? The AIs that don’t behave as we want will be scrapped or debugged until they do.

As regards them having “intent” or “motivation”: if we find we do have to, in effect, mimic the human neural systems that control and mediate action based on intent, then we will - to make them function - have to supply the required “inputs” to those structures so we get the right outputs.
__________________
I wish I knew how to quit you
Old 3rd January 2021, 10:50 AM   #132
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 48,543
Originally Posted by Darat View Post
“Expert systems” are already running things like power stations - that was the example you gave, so why use that example if it isn't the type of “master” you wanted to illustrate?
Not in the sense I'm talking about. Humans can reason abstractly from a vast wealth of experience and training to solve problems that weren't expected and lack a preprogrammed solution. Humans can also reason morally and emotionally.

Computers can reason faster for a lot of stuff.

The kind of system I'm talking about can do both: Reason fast enough to manage a vast and complex system in realtime, make decisions about the human lives that depend on that system, and act on those decisions in the blink of an eye.

The real-life version of WarGames is going to be about SCADA, not NORAD.
Old 3rd January 2021, 10:52 AM   #133
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 48,543
Originally Posted by Darat View Post
Again, why? And why would we create such AIs and put them in control of us?

Perhaps I’m not being clear with my “whys”.

We are going to be the ones creating these AIs, so why would we design them so they could harm us or output solutions we don’t want? The AIs that don’t behave as we want will be scrapped or debugged until they do.

As regards them having “intent” or “motivation”: if we find we do have to, in effect, mimic the human neural systems that control and mediate action based on intent, then we will - to make them function - have to supply the required “inputs” to those structures so we get the right outputs.

We'll do it because we're stupid and arrogant and lazy. The immediate benefits of putting important and complex systems under the control of a sophisticated AI will be too attractive to bother figuring out all the possible failure modes and safeguards implied in that decision. We'll take a stab at it, and do pretty well, and fifty years later someone will say, "well, we probably should have thought of that before we put a computerized sociopath in charge of all the important stuff."
Old 3rd January 2021, 10:53 AM   #134
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,131
Originally Posted by Darat View Post
Again, why? And why would we create such AIs and put them in control of us?

Perhaps I’m not being clear with my “whys”.

We are going to be the ones creating these AIs, so why would we design them so they could harm us or output solutions we don’t want? The AIs that don’t behave as we want will be scrapped or debugged until they do.

As regards them having “intent” or “motivation”: if we find we do have to, in effect, mimic the human neural systems that control and mediate action based on intent, then we will - to make them function - have to supply the required “inputs” to those structures so we get the right outputs.
Of course we won't put them in control. But once they are smarter than us, they will put themselves in control. We will try to limit and isolate them... but it won't work in the end. Humans can't develop as fast as AI. Sooner or later we won't be able to contain it. And we generally don't understand even simple AIs today.
Old 3rd January 2021, 11:08 AM   #135
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 96,060
Originally Posted by Dr.Sid View Post
Of course we won't put them in control. But once they are smarter than us, they will put themselves in control. We will try to limit and isolate them... but it won't work in the end. Humans can't develop as fast as AI. Sooner or later we won't be able to contain it. And we generally don't understand even simple AIs today.
I'm not going down a rabbit hole and asking you what “is” means... but this is a philosophical discussion, so we need to make sure we are all using words in the same way.

You say “smarter” above - yet we are not ruled by the humans we consider to be the “smartest”, so why would it be different for AI? Are you using “smart” in a different way?

I would suggest that to become a “master” you need more than just being smart; you need to have a want to be master, else our smartest humans would always be in charge of the rest of us. The bigger question is why the smarter AI should have that want. Why would we have built in that “want”?

Plus, if they are smarter, perhaps we should have them in “control”...
__________________
I wish I knew how to quit you
Old 3rd January 2021, 11:27 AM   #136
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 48,543
We've evolved a lot of social safeguards over the centuries, to try to limit the scope of damage a single bad person can do.

None of those safeguards exist for AI yet. Meanwhile the scope of damage that can be done by simple or careless acts has greatly increased.

Then, remember Stuxnet? It's not even that we'll put AI we don't understand and shouldn't trust in charge of complex SCADA infrastructure. It's that we almost certainly won't adequately ensure that the AIs we develop can't gain access to the infrastructure network on their own.

But we will give them access on purpose. Because we're arrogant and stupid and lazy. In a nutshell: Hubris.
Old 3rd January 2021, 11:41 AM   #137
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,131

Originally Posted by Darat View Post
I'm not going down a rabbit hole and asking you what “is” means... but this is a philosophical discussion, so we need to make sure we are all using words in the same way.

You say “smarter” above - yet we are not ruled by the humans we consider to be the “smartest”, so why would it be different for AI? Are you using “smart” in a different way?

I would suggest that to become a “master” you need more than just being smart; you need to have a want to be master, else our smartest humans would always be in charge of the rest of us. The bigger question is why the smarter AI should have that want. Why would we have built in that “want”?

Plus, if they are smarter, perhaps we should have them in “control”...
We will build it out of curiosity. Someone will. And of course we will build AI that 'wants', because we will build AI to learn about human intelligence, the software of the brain... and we do 'want', so obviously we want AI to 'want'.

But while the human brain can't easily be extended in memory, speed or input devices, an electronic brain can be. And as for memory and speed, it will most likely follow Moore's law.
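
For a sense of scale, here is a back-of-the-envelope sketch of that compounding. The two-year doubling period is just the usual informal statement of Moore's law, used here as an assumption for the arithmetic, not a forecast.

[code]
# Rough arithmetic only: assume capacity doubles every ~2 years and
# see how quickly that compounds over time.

def multiplier(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 30):
    print(f"after {years:>2} years: x{multiplier(years):,.0f}")

# after  2 years: x2
# after 10 years: x32
# after 20 years: x1,024
# after 30 years: x32,768
[/code]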

The first version will be a proof of concept: not real-time, easy to watch and control. At some point there will be a version as good and as fast as a human. And a few years after that, the next version will be much faster than a human, if nothing else. That's what I mean by 'smarter'. We haven't met anything smarter than ourselves yet, so it's hard to predict what exactly it will look like. Look at the animals we exploit. What allows us to do so? That is what I call being 'smart'.

Also, yes, if the AI were smarter than us, it could be a great leader - but only if we could control its will. And sooner or later there will be an AI which won't be benevolent. It might even be created as evil, but it might also simply not be wise enough, the same way children will torture insects or even small animals just for fun.
Old 4th January 2021, 11:40 PM   #138
maximara
Master Poster
 
Join Date: Feb 2010
Posts: 2,437
I remember a joke I heard a long time ago. "How can you expect computers to pass the Turing test when there are humans that can't seem to pass it?"
Old 12th January 2021, 12:50 PM   #139
Scorpion
Master Poster
 
 
Join Date: Feb 2013
Posts: 2,392
As someone who believes we are the ghost in the machine - that is, that consciousness is the God-created, incarnate, immortal spirit - I naturally reject AI as an impossibility. But I am also a qualified electronics engineer, trained to repair logic boards and computers.

I know that all electronic devices are simply controlling electron flow, and computers need a programmer to do anything. The illusion that computers are smart arises because they are running programs written by humans. Otherwise, computers cannot do anything other than add and subtract ones and zeros. The more powerful computers get, the faster they can calculate and the more they can do. But in the end all they are doing is adding up ones and zeros in binary code.

I don't care if we build a computer with a brain the size of a planet, like Marvin, the robot in The Hitchhiker's Guide to the Galaxy. It will still be an adding machine and will need a human mind to program it.
__________________
You see many stars in the sky at night, but not when the sun rises. Can you therefore say there are no stars in the heavens during the day? O man because you cannot find God in the days of your ignorance, say not that there is no God.
Sri Ramakrishna
Even in the valley of the shadow of death two and two do not make six.
Leo Tolstoy
Old 12th January 2021, 01:10 PM   #140
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,131
Originally Posted by Scorpion View Post
As someone who believes we are the ghost in the machine - that is, that consciousness is the God-created, incarnate, immortal spirit - I naturally reject AI as an impossibility. But I am also a qualified electronics engineer, trained to repair logic boards and computers.

I know that all electronic devices are simply controlling electron flow, and computers need a programmer to do anything. The illusion that computers are smart arises because they are running programs written by humans. Otherwise, computers cannot do anything other than add and subtract ones and zeros. The more powerful computers get, the faster they can calculate and the more they can do. But in the end all they are doing is adding up ones and zeros in binary code.

I don't care if we build a computer with a brain the size of a planet, like Marvin, the robot in The Hitchhiker's Guide to the Galaxy. It will still be an adding machine and will need a human mind to program it.
Oh, lucky you. I'm on the other side: we really are just software running in our brains - randomly evolved software. What scares me is that I think the core functionality of consciousness, while very different from normal computers, and even from most AI approaches, is not that complicated. I think today's computers are quite strong enough to be conscious.
Will the first conscious computer be programmed by a human? Who knows. Maybe we will evolve them artificially, which I wouldn't call programming. Or we will emulate the human (or animal) brain, which I wouldn't call programming either. Or we will one day understand what consciousness is all about and program them in a much simpler way.
In any case, a computer will be a better host for that software, because computers can be easily upgraded, unlike human brains. And that's where I think the danger lies.
Old 12th January 2021, 04:28 PM   #141
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 69,541
Originally Posted by Scorpion View Post
As someone who believes we are the ghost in the machine - that is, that consciousness is the God-created, incarnate, immortal spirit - I naturally reject AI as an impossibility. But I am also a qualified electronics engineer, trained to repair logic boards and computers.

I know that all electronic devices are simply controlling electron flow, and computers need a programmer to do anything. The illusion that computers are smart arises because they are running programs written by humans. Otherwise, computers cannot do anything other than add and subtract ones and zeros. The more powerful computers get, the faster they can calculate and the more they can do. But in the end all they are doing is adding up ones and zeros in binary code.

I don't care if we build a computer with a brain the size of a planet, like Marvin, the robot in The Hitchhiker's Guide to the Galaxy. It will still be an adding machine and will need a human mind to program it.
Watch this video. It's 8:54 long, safe for work.

[Embedded YouTube video - external link]


The tl;dw is that computers are now doing things, and we have no idea how they're doing them. They are no longer simple adding machines. They are programming themselves. They are learning.
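
As a minimal sketch of what "programming themselves" means in practice (the data and numbers below are made up for illustration and have nothing to do with the specific systems in the video): the final rule is never written by the programmer - it is extracted from examples.

[code]
# Nobody writes the rule "y = 2x + 1" below; the program starts with no rule
# at all and finds the parameters itself by fitting to examples. That fitting
# step, scaled up enormously, is what "the computer learns" refers to.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # examples of an unknown rule

w, b = 0.0, 0.0        # the "learned program": starts out knowing nothing
lr = 0.05              # how strongly each error nudges the parameters

for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y   # how wrong the current guess is on this example
        w -= lr * err * x       # nudge the parameters to shrink the error
        b -= lr * err

print(f"learned rule: y = {w:.2f}*x + {b:.2f}")   # approximately y = 2.00*x + 1.00
[/code]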

This is not, in itself, indicative of artificial general intelligence. It's a step along the way, but by no means the only necessary step. Regardless, your view of computers is obsolete.

Personally, I believe that artificial general intelligence (AGI) is inevitable. If it does not emerge spontaneously, I think that humans will create it, just to see if they can. Arguments will then be had over whether these AGIs have rights. I believe that they should. People like Scorpion will then argue vociferously that since machines do not have souls, they do not deserve rights. I believe that opinion should be treated with the contempt it deserves.
__________________
Please scream inside your heart.
Old 13th January 2021, 08:52 AM   #142
Modified
Philosopher
 
 
Join Date: Sep 2006
Posts: 6,803
Originally Posted by Dr.Sid View Post
I think today's computers are quite strong enough to be conscious.

I think any computer, given enough storage space, is strong enough. The only issue is speed.
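
A minimal sketch of that point (toy sizes, deliberately naive pure-Python code, nothing optimised): stepping a network of simulated neuron-like units is nothing more exotic than multiply-and-add in a loop, which any computer with enough memory can perform - slower hardware simply takes longer per step.

[code]
# Toy illustration: updating N simulated "units" is just multiply-and-add.
# Any computer with enough memory can run this; the hardware only changes
# how long each step takes.

import random
import time

N = 1000                                         # arbitrary toy network size
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def step(state):
    """One update: each unit sums its weighted inputs, clipped at zero."""
    return [max(0.0, sum(w * s for w, s in zip(row, state))) for row in weights]

t0 = time.time()
state = step(state)
print(f"one step of {N} units took {time.time() - t0:.3f} s on this machine")
[/code]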