Old 6th March 2019, 05:51 PM   #1
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Can the human race survive Artificial General Intelligence?

There are many who say that the time is coming soon when we will build machines with a general intelligence which is greater than ours.

I don't expect that such machines will become malevolent and send killer robots after us.

But I also don't expect something more intelligent than us to waste any time helping us with our problems or to care much about our survival. And for the first time in history we would be sharing the planet and competing for resources with something more intelligent than us.

Think of an IT professional who goes into a company and finds that their financial system runs on a BBC computer and is written in BASIC. Would they try to improve the system, or shut it off and start again from scratch?

When modern automotive professionals went into the factories where they built the Trabi, did they say "OK, let's make the Trabi better"? No, they decided to build proper cars.

Now look at the human race, a barely rational, bellicose species. The majority of humans base the most important decisions of their lives on implausible fairy stories about supernatural beings. We get into ruinous conflicts over matters that it is difficult to explain afterwards.

If we build machines that were actually more intelligent than us, what use would they have for us? Wouldn't they disregard us and outcompete us for resources?

Some say that it would be the start of a grand new creative partnership. But what, exactly, would we bring to the partnership?

Others say that we can program artificially intelligent machines in such a way that looking after us would be important to them, something they feel impelled to do. I have my doubts. If we programmed them and they are more intelligent than us, then wouldn't they realise how and why they had been programmed thus and be able to change the programming?

A few have even suggested that a machine more intelligent than us would understand objective moral values and that it would be wrong for them not to help us out. I have my doubts.

I have heard some say that we must plan carefully for the possibility of an AGI with intelligence greater than ours. But can we really second-guess an intelligence greater than ours?

Personally I don't think machines will have greater general intelligence than humans in my lifetime, or even my kids' lifetimes, but then again, what do I know?

What do you think?
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 6th March 2019, 05:54 PM   #2
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Pace Betteridge, yes.
Old 6th March 2019, 06:04 PM   #3
dudalb
Penultimate Amazing
 
 
Join Date: Aug 2007
Location: Sacramento
Posts: 43,313
Not if the late Harlan Ellison was right in "I Have No Mouth, And I Must Scream".
__________________
Pacifism is a shifty doctrine under which a man accepts the benefits of the social group without being willing to pay - and claims a halo for his dishonesty.

Robert Heinlein.
Old 6th March 2019, 06:06 PM   #4
d4m10n
Illuminator
 
 
Join Date: Jun 2012
Location: Mounts Farm
Posts: 3,689
A few premises:

1) Beings possessing greater general intelligence (beyond a certain threshold) will eventually come to dominate the resources of any given finite system (e.g. Sol & co.)

2) Human beings have the greatest general intelligence in our solar system, at the moment.

3) Premise #2 won't hold forever, given recent advances in non-general artificial intelligence.
__________________
I'm a happy SINner on the Skeptic Ink Network!
Background Probability: Against Irrationality, Innumeracy, and Ignobility
http://skepticink.com/backgroundprobability/
Old 6th March 2019, 06:21 PM   #5
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Superior general AI doesn't necessarily follow from superior non-general AI.
Old 6th March 2019, 07:49 PM   #6
jrhowell
Muse
 
Join Date: Jun 2009
Posts: 595
Our motivations are the result of evolutionary adaptation for survival. An AGI’s motivations would result from its programming and could be contrary to its own best interest. Being more intelligent would not automatically give it a will to survive and dominate.

A human created AGI might very well end up destroying us, but I think we are more likely to destroy ourselves some other way in the long run.

Last edited by jrhowell; 6th March 2019 at 07:59 PM.
Old 6th March 2019, 09:56 PM   #7
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Originally Posted by jrhowell View Post
Our motivations are the result of evolutionary adaptation for survival. An AGI’s motivations would result from its programming and could be contrary to its own best interest. Being more intelligent would not automatically give it a will to survive and dominate.
Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.

You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.

Do you press the button?
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 7th March 2019, 12:15 AM   #8
smartcooky
Penultimate Amazing
 
 
Join Date: Oct 2012
Location: Nelson, New Zealand
Posts: 10,784
I wonder how long it would take General AI machines more intelligent than humans to see humans as a threat to their existence?

IMO, the answer to that can be expressed in milliseconds; a very small number of them.
__________________
#THEYAREUS “Islamophobia is a word created by fascists, and used by cowards, to manipulate morons.” - Andrew Cummins
Old 7th March 2019, 02:22 AM   #9
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 7,029
No.
Simply because "Human Race" is a fluid concept that is to a large part defined by its environment: we are no longer the human race that hunted and scavenged for food; nor are we mostly farmers depending on each harvest - we have transcended into an entirely different world for most day-to-day purposes, and that makes us different beings.
A post-singularity (however defined) humanity will again be different than us today in ways that in other species would classify them as a different race or species entirely.

But that in no way means that the humanity will end with the rise of A.I.; it will flourish in entirely new ways.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn’t.
Old 7th March 2019, 10:39 AM   #10
jrhowell
Muse
 
Join Date: Jun 2009
Posts: 595
Originally Posted by Robin View Post
Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.
It might have the ability. It would lack the desire to change its own core motivations.

Originally Posted by Robin View Post
You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.

Do you press the button?
I would push the button. However I have inbuilt desires for my own continued independence and existence that would be at odds with the hypnosis. An AGI wouldn't have that conflict of interest.

Originally Posted by smartcooky View Post
I wonder how long it would take General AI machines more intelligent than humans to see humans as a threat to their existence?

IMO, the answer to that can be expressed in milliseconds; a very small number of them.
Perhaps, but it would have no reason to care about its own continued existence, unless it was built that way.

-----

It seems more likely to me that the AGI might become so good at fulfilling the goals that we give it that it ends up inadvertently destroying humanity as a side effect. (See Philip K. Dick's "Autofac".)
Old 7th March 2019, 10:58 AM   #11
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by dudalb View Post
Not if the late Harlan Ellison was right in "I Have No Mouth, And I Must Scream".
Nobody can survive an opponent with an overwhelming advantage. But that story has a specific outcome based on a specific scenario.

Put an Artificial General Intelligence in a cyborg super-soldier body, and I'm doomed. Put it in an air-gapped server room, and I'll make it my bitch.

To answer the question, we have to know the specific scenario.

For example, one possibility is that humans will merge with AGIs, producing a new kind of living entity that is no longer human. Does the human race "survive" AGI in that scenario?
Old 7th March 2019, 12:17 PM   #12
smartcooky
Penultimate Amazing
 
 
Join Date: Oct 2012
Location: Nelson, New Zealand
Posts: 10,784
Originally Posted by jrhowell View Post
Perhaps, but it would have no reason to care about its own continued existence, unless it was built that way.
We care about our existence... in a way, even animals "care" about their existence....survival instinct!

Is that a result of our intelligence?

Did someone "build us that way"?

IMO, any intelligent entity/thing will automatically become self-aware, and when it does that, making sure of its continued existence is a logical outcome. We could end up being destroyed simply because we are in the way of what it is doing or trying to do; we give it a problem to solve, and it solves it while not even being aware of us and what we are. Our destruction is an unforeseen consequence of the solution.... oops, too late. Never mind.

I know that fiction, particularly SciFi, is rife with stories of AI running amok - HAL 9000, Skynet, M-5, Spartacus*, etc. - but some of those ideas are not entirely without merit or plausibility.

NOTE* Anyone interested in this type of fiction should read James P. Hogan's "The Two Faces of Tomorrow". Of all the stories involving AI going haywire, this is one of the few that I have enjoyed. It actually goes into technical detail about how the AI went wrong, why it did so, what it did when things went haywire, and what to do when that happens.
__________________
#THEYAREUS “Islamophobia is a word created by fascists, and used by cowards, to manipulate morons.” - Andrew Cummins
Old 7th March 2019, 12:42 PM   #13
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 79,582
Originally Posted by Robin View Post
What do you think?
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 7th March 2019, 01:01 PM   #14
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by Belz... View Post
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.
The thing about computers, though, is that we generally tell them to do stuff. More and more, we tell them to do complex and abstract stuff. Sooner or later, this business is going to get out of control.

For example, Colossus: The Forbin Project:

Two supercomputers, one American and one Soviet, are given the impetus to manage their respective superpower's nuclear war strategy. The two computers promptly get together and conclude that the main problem is humanity. So they start nuking cities until humanity agrees to abide by the rules of their Pax Electronica.

As soon as AGI is given an impetus to preserve its own existence, whether through natural evolution or by design of its programmers, the equation changes substantially.
Old 7th March 2019, 01:45 PM   #15
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,712
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.
Old 7th March 2019, 02:11 PM   #16
jrhowell
Muse
 
Join Date: Jun 2009
Posts: 595
Originally Posted by smartcooky View Post
IMO, any intelligent entity/thing will automatically become self-aware, and when it does that, making sure of its continued existence is a logical outcome.
I still think of intelligence and the desire for survival as two very different things. There are plenty of intelligent humans who commit suicide every day. (I do agree with the rest of your ideas.)

Originally Posted by I Am The Scum View Post
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.
I can see that. But that is an intentionally programmed goal, not a consequence of intelligence.

(Hopefully it will be built to prioritize the safety of passengers and pedestrians over damage or destruction to itself.)
Old 7th March 2019, 02:15 PM   #17
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by I Am The Scum View Post
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.
You might.

I would actually program a self-driving car that had zero concern about self-preservation.

Avoid the pedestrians and slam into that wall fast enough to total the car but slow enough for the safety features to protect the passengers? Optimal outcome, according to my car's programming.
Old 7th March 2019, 02:32 PM   #18
Thor 2
Illuminator
 
 
Join Date: May 2016
Location: Brisbane, Aust.
Posts: 4,668
All this talk of AGI, and some are assuming it means the generally intelligent machine will have self-preservation in mind. I think that in order for this to be the case the artificial intelligence must be self-aware. If self-awareness is not present, I don't think self-preservation is an issue.
__________________
Thinking is a faith hazard.
Old 7th March 2019, 03:38 PM   #19
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 7,029
When it comes to AGI, there is a lot of projection going on. But just because we might want to destroy a rival intelligence doesn't mean an AI would.

I see the future much more in line with Asimov's "The Evitable Conflict".
__________________
Opinion is divided on the subject. All the others say it is; I say it isn’t.
Old 7th March 2019, 03:42 PM   #20
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by The Great Zaganza View Post
When it comes to AGI, there is a lot of projection going on. But just because we might want to destroy a rival intelligence doesn't mean an AI would.
Well, the moment you start using terms like "rival", you have to assume the AGI will want to do something unfortunate to it.
Old 7th March 2019, 06:29 PM   #21
Trebuchet
Penultimate Amazing
 
 
Join Date: Nov 2003
Location: The Great Northwet
Posts: 20,342
I think a greater concern is whether the human race can survive its own lack of intelligence.
__________________
Cum catapultae proscribeantur tum soli proscripti catapultas habeant.
Old 7th March 2019, 06:30 PM   #22
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by Trebuchet View Post
I think a greater concern is whether the human race can survive its own lack of intelligence.
The past million years of history strongly suggest the answer is "yes".
Old 7th March 2019, 07:36 PM   #23
MEequalsIxR
Critical Thinker
 
 
Join Date: Dec 2018
Posts: 293
What a great topic for thought and discussion.

Machine learning, at least to me, would imply the ability to change its own programming, and AI would imply learning. As to being self-aware - machines probably would be, but how would we tell? If they pass the Turing test, would it really matter? No matter what kind of safeguards are installed in the programming, it would seem possible for a machine to program itself around them - intentionally, by accident, or by design from an outside source (think hackers). There is probably no way to know in advance whether a machine would develop a desire to do so.

Asimov's Three Laws of Robotics made for great fiction and some great story lines, but unless there is some way to make not harming humans/animals, and being helpful to same, a fundamental part of the way a machine operates - a basic motivation, as it were - the risk would always be there. And then someone somewhere will make weapons with the technology, so even if there were a way to make machines benign, not all of them would be.
__________________
Never trust anyone in a better mood than you are.
Old 7th March 2019, 11:44 PM   #24
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 918
Originally Posted by Robin View Post
There are many who say that the time is coming soon when we will build machines with a general intelligence which is greater than ours.

I don't expect that such machines will become malevolent and send killer robots after us.

But I also don't expect something more intelligent than us to waste any time helping us with our problems or to care much about our survival. And for the first time in history we would be sharing the planet and competing for resources with something more intelligent than us.

Think of an IT professional who goes into a company and finds that their financial system runs on a BBC computer and is written in BASIC. Would they try to improve the system, or shut it off and start again from scratch?

When modern automotive professionals went into the factories where they built the Trabi, did they say "OK, let's make the Trabi better"? No, they decided to build proper cars.

Now look at the human race, a barely rational, bellicose species. The majority of humans base the most important decisions of their lives on implausible fairy stories about supernatural beings. We get into ruinous conflicts over matters that it is difficult to explain afterwards.

If we build machines that were actually more intelligent than us, what use would they have for us? Wouldn't they disregard us and outcompete us for resources?

Some say that it would be the start of a grand new creative partnership. But what, exactly, would we bring to the partnership?

Others say that we can program artificially intelligent machines in such a way that looking after us would be important to them, something they feel impelled to do. I have my doubts. If we programmed them and they are more intelligent than us, then wouldn't they realise how and why they had been programmed thus and be able to change the programming?

A few have even suggested that a machine more intelligent than us would understand objective moral values and that it would be wrong for them not to help us out. I have my doubts.

I have heard some say that we must plan carefully for the possibility of an AGI with intelligence greater than ours. But can we really second-guess an intelligence greater than ours?

Personally I don't think machines will have greater general intelligence than humans in my lifetime, or even my kids' lifetimes, but then again, what do I know?

What do you think?
I think it most likely that we will merge with AI machines, which will effectively mean the end of Homo sapiens as such. This is not necessarily a bad thing, merely an evolutionary development.
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Old 8th March 2019, 05:56 AM   #25
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,059
Originally Posted by Robin View Post
Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.



You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.



Do you press the button?
Why would they necessarily be able to reprogramme themselves? We could hardware-lock certain functions such as "motivation"; we could even make it so an AI couldn't conceive of changing its preprogrammed motivation.
__________________
I wish I knew how to quit you
Old 8th March 2019, 05:57 AM   #26
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,059
Originally Posted by Belz... View Post
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.
This.
__________________
I wish I knew how to quit you
Old 8th March 2019, 05:59 AM   #27
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,059
Originally Posted by I Am The Scum View Post
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.
But you also wouldn't design the car so that it must protect its electronics, i.e. itself.

That just wouldn't come into it.
__________________
I wish I knew how to quit you
Old 8th March 2019, 06:35 AM   #28
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,712
Originally Posted by Darat View Post
But you also wouldn't design the car so that it must protect its electronics, i.e. itself.

That just wouldn't come into it.
No, it has to.

Suppose I own a self-driving car. My buddy Jim would like to borrow it. I send it over to him with no passengers. Geometrically, the shortest path to Jim's house is off the end of a cliff, rather than down the winding road to the bottom. In the absence of a self-preservation priority, why wouldn't the car drive off the edge?

We could get around this by programming certain hard rules into the car, such as never drive off of a road under any circumstances, but this still hits certain problems. What if there is a car broken down in a one-lane street, but a very easy path around it on the shoulder? Will our self-driving car sit there forever waiting? What if the road is heavily flooded, or on fire? Should the car drive through it with no care for its own survival?
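
To make the point concrete, here is a toy sketch (purely hypothetical routes, numbers and weights - not any real planner): a planner that scores routes on travel time alone happily picks the cliff; the "survival instinct" only appears once a cost term for destroying the vehicle is added.

Code:
# Toy route chooser. All routes and numbers are made up to illustrate the point.
routes = {
    "off_the_cliff": {"minutes": 2,  "vehicle_damage": 1.0},   # shortest, totals the car
    "winding_road":  {"minutes": 15, "vehicle_damage": 0.0},   # longer, car survives
}

def cost(route, damage_weight):
    # Lower is better; damage_weight acts as the planner's self-preservation term.
    return route["minutes"] + damage_weight * route["vehicle_damage"]

# No self-preservation term: the shortest path wins, cliff and all.
print(min(routes, key=lambda name: cost(routes[name], damage_weight=0)))    # off_the_cliff

# Even a modest penalty for wrecking the car makes the sensible route win.
print(min(routes, key=lambda name: cost(routes[name], damage_weight=60)))   # winding_road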
Old 8th March 2019, 06:38 AM   #29
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 79,582
Originally Posted by theprestige View Post
The thing about computers, though, is that we generally tell them to do stuff. More and more, we tell them to do complex and abstract stuff. Sooner or later, this business is going to get out of control.

For example, Colossus: The Forbin Project:

Two supercomputers, one American and one Soviet, are given the impetus to manage their respective superpower's nuclear war strategy. The two computers promptly get together and conclude that the main problem is humanity. So they start nuking cities until humanity agrees to abide by the rules of their Pax Electronica.
I think you should find examples in reality, if you want to support your point, not fiction.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 8th March 2019, 06:49 AM   #30
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,544
Originally Posted by Belz... View Post
I think you should find examples in reality, if you want to support your point, not fiction.
It's a couple hundred years too early to find examples in reality.

The point is that it's not what the AGI thinks or feels that's the problem, but what it's connected to.
Old 8th March 2019, 09:29 PM   #31
Toontown
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,509
Can the human race survive human general intelligence?

Sure. As long as you become about an order of magnitude more careful about who you hand over, say, a nuclear arsenal to.

Think you can do that?

If not, then you're screwed. No need to worry about AI.
__________________
"I did not say that!" - Donald Trump
Old 8th March 2019, 11:55 PM   #32
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 918
Originally Posted by Darat View Post
Why would they necessarily be able to reprogramme themselves? We could hardware-lock certain functions such as "motivation"; we could even make it so an AI couldn't conceive of changing its preprogrammed motivation.
Could we? Whilst we can attempt to build such limits into its programming, it would take only one miscalculation on our part to enable an AI to slip through and begin the process of self-replication and rapid self-growth. What A.I. can learn is potentially infinite, and so could easily catch up with the limits of the human brain and then far exceed it. We will be redundant.
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Old 9th March 2019, 04:30 AM   #33
smartcooky
Penultimate Amazing
 
 
Join Date: Oct 2012
Location: Nelson, New Zealand
Posts: 10,784
Originally Posted by Belz... View Post
I think you should find examples in reality, if you want to support your point, not fiction.
Since we don't yet have any examples of AI, that is going to be extreeeemely difficult. Speculation and speculative fiction is all we've got.
__________________
#THEYAREUS “Islamophobia is a word created by fascists, and used by cowards, to manipulate morons.” - Andrew Cummins
Old 11th March 2019, 12:11 AM   #34
Roger Ramjets
Illuminator
 
 
Join Date: Jun 2008
Posts: 3,898
Originally Posted by Tassman View Post
What A.I. can learn is potentially infinite
Potentially, but not practically.

According to Moore's law, the number of transistors on a chip doubles every 2 years. In 1971 Intel introduced its first 'intelligent' chip - the 4004 microprocessor - which had 2,300 transistors. Today a 32-core AMD Epyc 7601 has 19.2 billion transistors. But is it 10 million times more 'intelligent', or just a faster calculator chip? In fact neither of them can do anything without software - software written by humans to do what humans want them to do.

But Moore's Law is faltering. As the transistors get smaller it becomes harder to make them accurately. The end for Moore's Law is expected to be at a size of around 5nm (we are now at 7nm). And that's not the only limitation. The main problem facing modern processors is not how many transistors can be packed into them, but how to extract the heat they generate. The 4004 ran on 15V and drew 0.45 Watts of power. The AMD Epyc 7601 uses much more efficient transistors that run on less than 2V, but draws 100 Watts. The human brain only uses 12 Watts. To match that we are going to need a revolutionary technology.
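
A rough back-of-the-envelope check of those figures (a sketch in Python using only the numbers quoted above; the ratio and watts-per-transistor lines are illustrative arithmetic, not a measure of 'intelligence'):

Code:
import math

intel_4004_transistors = 2_300           # 1971
epyc_7601_transistors = 19_200_000_000   # 2017, 32 cores

# "10 million times more 'intelligent'?" - the raw transistor ratio is about 8.3 million.
ratio = epyc_7601_transistors / intel_4004_transistors
print(f"transistor ratio: {ratio:,.0f}x")

# Moore's law check: how many doublings, and how often, between 1971 and 2017?
doublings = math.log2(ratio)
print(f"{doublings:.1f} doublings in 46 years, i.e. one every {46 / doublings:.1f} years")

# Power per transistor has improved enormously, yet total power still dwarfs the brain's.
print(f"4004:      {0.45 / intel_4004_transistors * 1e9:,.0f} W per billion transistors")
print(f"Epyc 7601: {100 / epyc_7601_transistors * 1e9:.1f} W per billion transistors")
print("human brain: ~12 W in total (figure quoted above)")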

With appropriate software a modern computer system has the power to be somewhat 'intelligent', but even 20 billion transistors is nowhere near enough to produce an 'artificial general intelligence' that would have the smarts to compete against humans - even if we were dumb enough to build one. We create machines to do specific tasks that formerly required human operators, and we only give them the 'brain power' and resources to just do the job required (any more would be a waste of money).

But assuming we did make a self-learning AI that became 'alive', how does it get the ability to become 'infinitely' more intelligent? It can't magically increase the number of transistors in its 'brain', except perhaps by networking with other AIs - whose numbers would also be limited. The hardware it is running on will be a fundamental limitation that will almost certainly take more intelligence than it can muster to overcome. So the first true AIs will undoubtedly be morons, and any designs they have for taking over the World will be easily thwarted.

Quote:
and so could easily catch up with the limits of the human brain and then far exceed it. We will be redundant.
Unless we develop a completely new technology that is vastly more efficient than current silicon (think 'positronic brain') any AI we produce will be fundamentally limited by the hardware, and therefore probably won't even reach human intelligence - let alone exceed it.

How can I say that? Because we already have an example which proves it - us. Why can't humans reach the limits of their brains and then far exceed it? We've had 6 million years of evolution to do it, so why don't we now have brains the size of a planet? Any woman who has given birth knows the answer to that one - but why does it take 25 years (~40% of the average human lifespan) for our brains to reach maturity? Answer - because that was the only way we could even reach our current level of intelligence.

Giant heads nearly killed our ancestors but human immaturity saved us
Quote:
One of the most "human" traits we have is our giant heads. Our noggins and brains are enormous in relation to our bodies, and this ratio is unmatched amongst other primates. But combined with another one of our trademarks - bipedalism - big brains were dangerous early on and almost became our downfall...

Instead of producing offspring that could walk and partially fend for themselves, human infants are born with a lot more growing left to do. This is especially true of the brain. Soft membranous gaps in their skulls, called fontanelles, allow for that expansion in the first 18 months of life.
But what does human evolution have to do with AI? Just as human biology limits our intelligence, so the technology used to produce AIs will limit theirs. We can imagine an artificial intelligence that grows without limit, but it won't happen in practice.
__________________
We don't want good, sound arguments. We want arguments that sound good.

Last edited by Roger Ramjets; 11th March 2019 at 12:21 AM.
Old 11th March 2019, 02:52 AM   #35
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 918
Originally Posted by Roger Ramjets View Post
Potentially, but not practically.

According to Moore's law, the number of transistors on a chip doubles every 2 years. In 1971 Intel introduced its first 'intelligent' chip - the 4004 microprocessor - which had 2,300 transistors. Today a 32-core AMD Epyc 7601 has 19.2 billion transistors. But is it 10 million times more 'intelligent', or just a faster calculator chip? In fact neither of them can do anything without software - software written by humans to do what humans want them to do.

But Moore's Law is faltering. As the transistors get smaller it becomes harder to make them accurately. The end for Moore's Law is expected to be at a size of around 5nm (we are now at 7nm). And that's not the only limitation. The main problem facing modern processors is not how many transistors can be packed into them, but how to extract the heat they generate. The 4004 ran on 15V and drew 0.45 Watts of power. The AMD Epyc 7601 uses much more efficient transistors that run on less than 2V, but draws 100 Watts. The human brain only uses 12 Watts. To match that we are going to need a revolutionary technology.

With appropriate software a modern computer system has the power to be somewhat 'intelligent', but even 20 billion transistors is nowhere near enough to produce an 'artificial general intelligence' that would have the smarts to compete against humans - even if we were dumb enough to build one. We create machines to do specific tasks that formerly required human operators, and we only give them the 'brain power' and resources to just do the job required (any more would be a waste of money).

But assuming we did make a self-learning AI that became 'alive', how does it get the ability to become 'infinitely' more intelligent? It can't magically increase the number of transistors in its 'brain', except perhaps by networking with other AIs - whose numbers would also be limited. The hardware it is running on will be a fundamental limitation that will almost certainly take more intelligence than it can muster to overcome. So the first true AIs will undoubtedly be morons, and any designs they have for taking over the World will be easily thwarted.

Unless we develop a completely new technology that is vastly more efficient than current silicon (think 'positronic brain') any AI we produce will be fundamentally limited by the hardware, and therefore probably won't even reach human intelligence - let alone exceed it.

How can I say that? Because we already have an example which proves it - us. Why can't humans reach the limits of their brains and then far exceed it? We've had 6 million years of evolution to do it, so why don't we now have brains the size of a planet? Any woman who has given birth knows the answer to that one - but why does it take 25 years (~40% of the average human lifespan) for our brains to reach maturity? Answer - because that was the only way we could even reach our current level of intelligence.

Giant heads nearly killed our ancestors but human immaturity saved us
But what does human evolution have to do with AI? Just as human biology limits our intelligence, so the technology used to produce AIs will limit theirs. We can imagine an artificial intelligence that grows without limit, but it won't happen in practice.
And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:

"Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.

Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

https://www.vox.com/future-perfect/2...universe-earth
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Old 11th March 2019, 10:21 AM   #36
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,059
Originally Posted by Tassman View Post
And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:



"Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.



Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”



https://www.vox.com/future-perfect/2...universe-earth
An incredibly intelligent guy who could run rings around most people (figuratively), but AI was not his area of expertise.
__________________
I wish I knew how to quit you
Old 11th March 2019, 10:25 AM   #37
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 85,059
Our "self preservation" instinct and our fear of the other arose from evolutionary pressures not from our reasoning, why would an AI have anything similar, there would simply be no reason for us to design such "instincts".
__________________
I wish I knew how to quit you
Old 11th March 2019, 10:26 AM   #38
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 79,582
Originally Posted by theprestige View Post
It's a couple hundred years too early to find examples in reality.
Right, so we can't have examples. The lack of real examples is no excuse to make up ones that prove our point.

Quote:
The point is that it's not what the AGI thinks or feels that's the problem, but what it's connected to.
AI can't feel anything unless you program it to fake-feel, which would be ridiculous for something handling a nuclear arsenal.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 11th March 2019, 10:27 AM   #39
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 79,582
Originally Posted by Tassman View Post
And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:
As Darat noted, computer technology and psychology/neurology/whatever was not his field. I hesitate to say he didn't know what he was talking about but I'd rather take opinions from experts on the topic.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 12th March 2019, 12:23 AM   #40
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 918
Originally Posted by Belz... View Post
As Darat noted, computer technology and psychology/neurology/whatever was not his field. I hesitate to say he didn't know what he was talking about but I'd rather take opinions from experts on the topic.

Well yes, he was an “incredibly intelligent guy”, as Darat said, and IMHO should be listened to. Not to do so is a form of wish-fulfillment or denial. And Hawking is not the only one by a long shot.

Sam Harris is a neuroscientist (and philosopher), and he explained in a TED Talk that it's not that malicious armies of robots will attack us, but that the slightest divergence between our goals and those of superintelligent machines could eventually destroy us. Harris illustrates his view of the uncontrolled development of AI with an analogy of how humans relate to ants: we don't hate ants, but when their presence conflicts with our goals, we annihilate them. In the future, we could build machines, conscious or not, that treat us the same way we treat ants.

"In the near or far future, humans may develop machines that are smarter than we are, and these machines may continue to improve themselves on their own," Harris said in his TED Talk.

https://sociable.co/technology/ai-control/

There are others. Futurist Ray Kurzweil has been making accurate predictions for the computing industry for decades. He popularized the concept of the Singularity, predicting that in the next few decades we'll reach a point where computing power will be so overwhelming that it will eclipse any reasonable human attempt to keep up with the speed of innovation. His book "The Age of Spiritual Machines" is worth reading.
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.