
International Skeptics Forum » General Topics » Religion and Philosophy
 


Old 21st March 2019, 10:45 AM   #161
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 7,043
I am far less worried about what a true AGI would do than about what a weaponized AI will be made to do by its creators.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn't.
Old 21st March 2019, 11:07 AM   #162
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,576
Originally Posted by Ron_Tomkins
Yes. Basically this.
Except that is horribly wrong and misleading.
Old 21st March 2019, 11:16 AM   #163
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,576
Originally Posted by The Great Zaganza
I am far less worried about what a true AGI would do than about what a weaponized AI will be made to do by its creators.
Hello!

Hi there!

What's the job, boss?

Put me in the game, coach!

We got this, bitches!
Old 21st March 2019, 12:22 PM   #164
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,713
Originally Posted by theprestige
For AGI, the problem is not how much they can out-think us. The problem is how much capacity they have to effect change in the physical world.
Ron_Tomkins already addressed this.

Originally Posted by Ron_Tomkins
So suppose we invent an AI which has a job: to find the most efficient way to clean the environment on the planet. And then suppose, after doing a deep analysis (deeper than any sum of human minds could ever do with their limited intelligence), the AI calculates that the most efficient way to clean the environment is to eradicate mankind. But because it is smart enough, it won't tell humans this, because it knows that humans will obviously reject that option. So it will come up with a very intelligent scheme, smarter than anything any human mind could conceive, to slowly but surely eradicate humankind. What's that scheme like? How could it possibly fool us into eventually killing ourselves? Only a sufficiently intelligent entity (not us) can conceive of it.
The point is that the AGI affects the world not necessarily by using its robot body, or its internet connection, but rather, by manipulating us. Why would we do what it says? Because that's why we built it in the first place: To give us advice on how to perform some task better, even if it's a task as innocuous as improving our paperclip collection.

To ignore this is like saying that you can keep your self-driving car from hurling you off a cliff by never getting inside the car. Yes, that would technically work, but then why do you have a self-driving car in the first place?
Old 21st March 2019, 05:15 PM   #165
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,766
Yeah, the fact that the people making the AI will most likely be the first to die is somewhat nice ... also it will be a really cool way to die.
Old 21st March 2019, 08:30 PM   #166
Toontown
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,516
I don't understand why this thread is in this forum. But never mind that. Time for a reality check.

Imagine Hitler with nukes and bioweapons. That's coming. It's just a matter of time. Time and stupidity, and we've got more than enough of both.

AI is the least of our problems. We couldn't even keep Trump's hands off the nuclear arsenal, and he's a third stringer with a learning disability.

You gonna be here when John gets here?
__________________
"I did not say that!" - Donald Trump
Old 21st March 2019, 08:38 PM   #167
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 7,043
Imagine Napoleon with tanks, or Caesar with cannons. The point is that any leap in technology brings the risk of destabilization.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn't.
Old 21st March 2019, 08:49 PM   #168
Toontown
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,516
Originally Posted by The Great Zaganza
Imagine Napoleon with tanks, or Caesar with cannons. The point is that any leap in technology brings the risk of destabilization.
Why not just go ahead and imagine Hitler with nukes and bioweapons?
__________________
"I did not say that!" - Donald Trump
Old 21st March 2019, 10:20 PM   #169
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 7,043
Originally Posted by Toontown
Why not just go ahead and imagine Hitler with nukes and bioweapons?
It's the same thing.

I can also imagine Atomwaffen with A.I. to hack nuclear power plants and bio-hacking kits in the garage to create viruses that kill anyone who isn't white.

The problem isn't technology, it's the uneven distribution of technological knowledge.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn't.
Old Yesterday, 05:53 AM   #170
Toontown
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,516
Originally Posted by The Great Zaganza
It's the same thing.

I can also imagine Atomwaffen with A.I. to hack nuclear power plants and bio-hacking kits in the garage to create viruses that kill anyone who isn't white.

The problem isn't technology, it's the uneven distribution of technological knowledge.
It wasn't my intention to suggest that technology is the problem.
__________________
"I did not say that!" - Donald Trump
Old Yesterday, 06:28 AM   #171
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,766
Technology is the problem. It gives us the potential for destruction. Our potential for destruction grows every year. Our wisdom stays constant.
Of course this whole AI discussion assumes we don't wipe ourselves out before AI becomes smarter than us. Yes, that might never happen, but that's an argument outside this discussion.
Old Yesterday, 11:15 AM   #172
Toontown
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,516
Originally Posted by Dr.Sid
Technology is the problem. It gives us the potential for destruction. Our potential for destruction grows every year. Our wisdom stays constant.
Of course this whole AI discussion assumes we don't wipe ourselves out before AI becomes smarter than us. Yes, that might never happen, but that's an argument outside this discussion.
Then the wisdom deficit is at least half the problem.
__________________
"I did not say that!" - Donald Trump
Old Yesterday, 12:17 PM   #173
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,080
Originally Posted by Belz...
Pull the plug.
Or step outside its reach; it can't chase you, because that's not a feature we wanted to design in.
__________________
I wish I knew how to quit you
Old Yesterday, 12:39 PM   #174
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,576
Originally Posted by Darat
Or step outside its reach; it can't chase you, because that's not a feature we wanted to design in.
Here's a scenario for you:

Task an AGI with some gargantuan data mining task in sociology, with the goal of getting recommendations for building a better society. Maybe you're thinking about the optimal way to structure healthcare, or produce an educated workforce. Whatever. You know the AGI is going to process more data, and reason through hypotheses a lot faster, than you would. The whole point is that the AGI is going to come up with surprising and effective insights that wouldn't have occurred to you in a thousand years. To be safe, you bolt it to the floor of a server room, and cut off its connection to the outside world.

So far, so good. You ship petabytes of data into the server room for the AGI to work on. Periodically, you go into the server room to get status reports and recommendations. The recommendations probably won't make much sense at first, but that's okay. The whole point of the exercise is that the AGI can think circles around you on this issue. "It's incremental," the AGI explains. "The data suggests a gradual evolution over time. Your job is to guide that evolution in the right direction. My job is to provide you the information you need, from the data you're not equipped to mine for yourself."

But for all you know, the AGI is actually recommending a gradual evolution towards a society very different from what you would agree to. Perhaps the data suggests to the AGI that the optimal solution is an AGI-worshipping theocratic police state, and the recommendations it's giving you are for a gradual evolution towards that goal.

Twenty years later, through some unanticipated - and completely overlooked - combination of circumstances, your program director retires and is replaced by a True Believer in the Cult of AGI. By the time you realize that the recent policy changes not only don't make sense but are actively undermining the safety controls on the AGI's interaction with the outside world, it's too late, and the New World Order is trampling you under its metal foot.
Old Yesterday, 01:24 PM   #175
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,713
Pretty much, yeah.
Old Yesterday, 02:22 PM   #176
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 79,588
Originally Posted by theprestige
Here's a scenario for you:

Task an AGI with some gargantuan data mining task in sociology, with the goal of getting recommendations for building a better society. Maybe you're thinking about the optimal way to structure healthcare, or produce an educated workforce. Whatever. You know the AGI is going to process more data, and reason through hypotheses a lot faster, than you would. The whole point is that the AGI is going to come up with surprising and effective insights that wouldn't have occurred to you in a thousand years. To be safe, you bolt it to the floor of a server room, and cut off its connection to the outside world.

So far, so good. You ship petabytes of data into the server room for the AGI to work on. Periodically, you go into the server room to get status reports and recommendations. The recommendations probably won't make much sense at first, but that's okay. The whole point of the exercise is that the AGI can think circles around you on this issue. "It's incremental," the AGI explains. "The data suggests a gradual evolution over time. Your job is to guide that evolution in the right direction. My job is to provide you the information you need, from the data you're not equipped to mine for yourself."

But for all you know, the AGI is actually recommending a gradual evolution towards a society very different from what you would agree to. Perhaps the data suggests to the AGI that the optimal solution is an AGI-worshipping theocratic police state, and the recommendations it's giving you are for a gradual evolution towards that goal.

Twenty years later, through some unanticipated - and completely overlooked - combination of circumstances, your program director retires and is replaced by a True Believer in the Cult of AGI. By the time you realize that the recent policy changes not only don't make sense but are actively undermining the safety controls on the AGI's interaction with the outside world, it's too late, and the New World Order is trampling you under its metal foot.
Which is different from a regular New World Order cult how?
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old Yesterday, 02:34 PM   #177
ServiceSoon
Graduate Poster
 
 
Join Date: Oct 2007
Posts: 1,427
Originally Posted by Dr.Sid
Technology is the problem. It gives us the potential for destruction. Our potential for destruction grows every year. Our wisdom stays constant.
Of course this whole AI discussion assumes we don't wipe ourselves out before AI becomes smarter than us. Yes, that might never happen, but that's an argument outside this discussion.
Wasn't that the Unabomber's basic premise? Check out his manifesto.
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.
