
Go Back   International Skeptics Forum » General Topics » Religion and Philosophy
 


Old 12th March 2019, 04:34 AM   #41
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Posts: 23,588
The AI would have whatever motivations and rules of behavior we program it to have. So unless we are very careless about what we program it to do, it shouldn't be a problem.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 12th March 2019, 04:51 AM   #42
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Tassman View Post
Sam Harris is a neuroscientist (and philosopher), and he explained in a TED Talk that it's not that malicious armies of robots will attack us, but that the slightest divergence between our goals and those of superintelligent machines could inevitably destroy us. To explain his stance, Harris describes the uncontrolled development of AI with an analogy of how humans relate to ants. As he puts it, we don't hate ants, but when their presence conflicts with our goals, we annihilate them.
But that's not even comparable. The goals of computers are OUR GOALS.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 12th March 2019, 08:15 AM   #43
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,941
You guys are assuming some flawless programmers.
Old 12th March 2019, 08:45 AM   #44
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by I Am The Scum View Post
You guys are assuming some flawless programmers.
Why do you say that?

Artificial intelligence doesn't mean that a computer has its own will and goals. Those need 'commands' to exist. I, in fact, am assuming that those commands would not be put into the system, and that has nothing to do with the quality of the programming.

I think some people here have very Hollywoodian conceptions of what AI is.
Old 12th March 2019, 09:16 AM   #45
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 8,631
I'll always go with Kevin Kelly when it comes to predicting the future of technology.
__________________
Careful! That tree's bark is worse than its bite.
Old 12th March 2019, 10:44 AM   #46
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,941
Originally Posted by Belz... View Post
Why do you say that?

Artificial intelligence doesn't mean that a computer has its own will and goals. Those need 'commands' to exist. I, in fact, am assuming that those commands would not be put into the system, and that has nothing to do with the quality of the programming.

I think some people here have very Hollywoodian conceptions of what AI is.
The whole point of AI is that it does more than what is programmed into it. A chess AI doesn't play well because the programmer set a bunch of if/then commands (when the board looks like this move your queen here). Rather, it plays well because it is able to analyze possibilities and come up with its own strategies that are far more complex than any human could ever imagine.
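The chess point can be made concrete with a toy sketch (my illustration, not anyone's actual engine): a minimax player for the take-away game Nim. Only the rules and the goal are programmed in; the winning strategy is discovered by lookahead, not written as if/then rules.

```python
# Strategy emerging from search rather than hand-coded if/then rules:
# minimax for Nim. A move removes 1-3 stones; whoever takes the last
# stone wins. The "insight" (leave the opponent a multiple of 4) is
# never written down anywhere -- exhaustive lookahead discovers it.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, can_win) for the player about to move."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True                 # taking the rest wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True                 # leave opponent a losing position
    return 1, False                           # every move loses vs. best play

print(best_move(10))   # (2, True): leave 8 stones, a losing position
```

The same shape, scaled up with pruning and learned evaluation functions, is roughly how chess programs produce moves their programmers never anticipated.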
Old 12th March 2019, 10:47 AM   #47
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by I Am The Scum View Post
The whole point of AI is that it does more than what is programmed into it. A chess AI doesn't play well because the programmer set a bunch of if/then commands (when the board looks like this move your queen here). Rather, it plays well because it is able to analyze possibilities and come up with its own strategies that are far more complex than any human could ever imagine.
That's all true but you're missing the point: an AI that, say, is designed specifically to drive a car won't suddenly develop a wish to trample pedestrians because it views humans as inefficient. That's completely outside of the scope of its algorithm. "Able to learn" doesn't mean it doesn't have boundaries. We're not talking about an AI that is designed to be a full person with no limits to its knowledge or opinions.
Old 12th March 2019, 01:01 PM   #48
MEequalsIxR
Critical Thinker
 
 
Join Date: Dec 2018
Posts: 473
Originally Posted by Puppycow View Post
The AI would have whatever motivations and rules of behavior we program it to have. So unless we are very careless about what we program it to do, it shouldn't be a problem.
I think it's just the opposite - no matter how careful we are there will always be unforeseen holes. Laws, rules and procedures are written to close loopholes, and yet someone always seems to find one. Programs written for security are meant to make data secure, yet data is breached anyway.

I just don't see how it's possible to build in safeguards that cannot be worked around, bypassed or eliminated. Not even necessarily from a nefarious motivation, but just as a means of doing something more efficiently or more directly or even more logically than originally programmed.

In the movie 2001, the HAL 9000 becomes homicidal not out of malice but out of a conflict between knowing the real purpose of the mission and having to lie to the crew members. The logic being that if HAL killed the astronauts it would not have to lie to them. The story is of course fiction and not really all that likely - there's not much of a likelihood that aliens will create monoliths to terraform (their version of terraforming) Jupiter or Saturn (depending on book or film), and we are not likely to mount a mission to investigate while withholding the true purpose from the crew sent to investigate - but the basic idea that some unknown conflict could produce unpredictable results is very believable. And intelligence means the ability to learn: to learn new things, and new ways of doing them.

We frequently build things we lose control of, or that operate in ways we didn't anticipate. Often it's just chalked up to operator error or a lack of ability on the operator's part, and likely some percentage is just that. But how often is it that the creation simply didn't operate within the parameters originally intended?

Ever see a cat staring at a pile of furniture trying to map the route to the top? They always seem to find a way.

When a machine is designed to think, it's going to do that. If the designers already knew the answers to the problems the machine was going to tackle, the machine would not need to exist.
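The cat-and-furniture point is just search over whatever paths exist. A toy sketch (mine; the room graph and names are invented): block the route the designer expected, and a plain breadth-first search still finds a way to the top. Nothing malicious is involved, only optimization.

```python
# Closing the "intended" loophole doesn't stop a search from finding
# another one. Breadth-first search over a room graph: remove the
# expected floor -> chair -> shelf -> top route and the search routes
# around the safeguard via the sofa.
from collections import deque

def find_route(edges, start, goal):
    """Shortest path in an undirected graph, or None if unreachable."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

room = [("floor", "chair"), ("chair", "shelf"), ("shelf", "top"),
        ("floor", "sofa"), ("sofa", "bookcase"), ("bookcase", "top")]
blocked = [e for e in room if e != ("chair", "shelf")]   # the "safeguard"
print(find_route(blocked, "floor", "top"))   # route via the sofa anyway
```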
__________________
Never trust anyone in a better mood than you are.

It's a sword they're not meant to safe.
Old 12th March 2019, 01:24 PM   #49
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 35,855
I guess I wouldn't call a self-driving car an AGI. I'm thinking of an intelligence that can manage arbitrary tasks, using a set of dynamic and evolving heuristics, according to a complex set of subjective and conflicting values. Centrally managing an economy. Running the entire SCADA infrastructure for a developed nation. Coordinating a drone swarm in support of a ground offensive against heavy jamming.

Stuff where there is no easy answer, just complex judgement calls that have to be made. You don't program such an AGI to do a task. You program it to come up with creative solutions to as-yet-unknown problems, and set it loose on a problem space. Let it decide which trade-offs make the most sense based on the impetus you gave it to start with.

Basically, you want AGI for those situations where you need a computer to solve a problem, not the way computers solve problems, but the way humans solve problems. Not programmatically, by rote, but by a combination of following formal rules and engaging in intuitive leaps. You want an AGI for those situations where you need a computer that knows when and how to ignore the rules.
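One hedged way to picture those judgement calls (a toy of my own, not theprestige's design): score candidate plans against weighted, conflicting values, so the trade-off falls out of the scoring rather than out of a rule per case. The plans, weights, and numbers below are all invented.

```python
# "Complex judgement calls" as weighted multi-objective scoring.
# The weights are the impetus you gave the system to start with;
# which plan wins is decided by the values, not by a per-case rule.
plans = {
    "reroute_power": {"cost": 0.9, "risk": 0.1, "speed": 0.8},
    "shed_load":     {"cost": 0.2, "risk": 0.6, "speed": 0.9},
    "do_nothing":    {"cost": 0.0, "risk": 0.9, "speed": 0.0},
}
values = {"cost": -1.0, "risk": -2.0, "speed": 1.0}   # subjective priorities

def judge(plan):
    """Higher is better under the current value weights."""
    return sum(values[k] * v for k, v in plan.items())

best = max(plans, key=lambda name: judge(plans[name]))
print(best)   # reroute_power: expensive, but risk is weighted heavily
```

Change the weight on "risk" and a different plan wins, which is the sense in which the behaviour comes from the impetus rather than from task-specific programming.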
Old 13th March 2019, 12:29 AM   #50
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 923
Originally Posted by Belz... View Post
But that's not even comparable. The goals of computers are OUR GOALS.
You are too complacent. E.g.:

"Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI."

https://www.forbes.com/sites/tonybra.../#1a4559cf292c
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Old 13th March 2019, 12:38 AM   #51
Tassman
Muse
 
 
Join Date: Aug 2012
Posts: 923
Originally Posted by MEequalsIxR View Post
I think it's just the opposite - no matter how careful we are there will always be unforeseen holes. Laws, rules and procedures are written to close loopholes, and yet someone always seems to find one. Programs written for security are meant to make data secure, yet data is breached anyway.

I just don't see how it's possible to build in safeguards that cannot be worked around, bypassed or eliminated. Not even necessarily from a nefarious motivation, but just as a means of doing something more efficiently or more directly or even more logically than originally programmed.

In the movie 2001, the HAL 9000 becomes homicidal not out of malice but out of a conflict between knowing the real purpose of the mission and having to lie to the crew members. The logic being that if HAL killed the astronauts it would not have to lie to them. The story is of course fiction and not really all that likely - there's not much of a likelihood that aliens will create monoliths to terraform (their version of terraforming) Jupiter or Saturn (depending on book or film), and we are not likely to mount a mission to investigate while withholding the true purpose from the crew sent to investigate - but the basic idea that some unknown conflict could produce unpredictable results is very believable. And intelligence means the ability to learn: to learn new things, and new ways of doing them.

We frequently build things we lose control of, or that operate in ways we didn't anticipate. Often it's just chalked up to operator error or a lack of ability on the operator's part, and likely some percentage is just that. But how often is it that the creation simply didn't operate within the parameters originally intended?

Ever see a cat staring at a pile of furniture trying to map the route to the top? They always seem to find a way.

When a machine is designed to think, it's going to do that. If the designers already knew the answers to the problems the machine was going to tackle, the machine would not need to exist.
You are exactly right. If we knew all the answers and eventualities when programming an AI machine, we wouldn't need such a machine in the first place. It's the unprogrammed part that we actually need, and it's precisely that part which may go its own way, unprogrammed by us at all.
Old 13th March 2019, 12:54 AM   #52
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,718
Originally Posted by The Great Zaganza View Post
When it comes to AGI, there is a lot of projection going on. But just because we might want to destroy a rival intelligence doesn't mean an AI would.

I see the future much more in line with Asimov's "The Evitable Conflict".
I didn't suggest that it would want to destroy a rival intelligence.

Sent from my Moto C using Tapatalk
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 13th March 2019, 01:08 AM   #53
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,718
There are a lot of people who seem to have a lot of faith that we could control a superior intelligence by programming it to want to serve us.

Old 13th March 2019, 01:11 AM   #54
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,718
If I went into a company that had their accounting system written in BASIC on a BBC computer, it wouldn't be because I have some evil desire to destroy BBC computers or BASIC programs ...

Old 13th March 2019, 01:31 AM   #55
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,718
Originally Posted by Darat View Post
Why would they necessarily be able to reprogramme themselves? We could hardware lock certain functions such as “motivation“, we could even make it so an AI couldn't even conceive of changing its preprogrammed motivation.
If we can program them and they are more intelligent than us ...

Also you are assuming that they would not be able to find a way to alter their own hardware.

Old 13th March 2019, 01:54 AM   #56
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,718
Originally Posted by Belz... View Post
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they ever only do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.
If it had no impetus then it wouldn't do anything at all. If you wanted it to solve a problem and it had no impetus to help you then it wouldn't help you solve the problem.

So we would have to give it some sort of impetus.

By hypothesis it has a general intelligence. It perceives, it understands and can form intentions. It can take initiative. It can think outside the box. It can reframe questions and requests that don't make sense, just as we can, only better than us.

So here is this machine, smarter than you, one that understands you better than you understand yourself, and you tell it to do something for you.

And you are confident it wouldn't reframe the request, think outside the box or take some sort of initiative that wouldn't be in your best interest?
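The "impetus" idea these posts are circling can be sketched mechanically (assumptions mine: a greedy hill-climber over integers stands in for the AI). The same machinery sits idle with no objective and acts the moment one is supplied; the capability is just optimization, and the impetus comes from outside.

```python
# Impetus as objective: identical machinery does nothing, or something,
# depending entirely on the objective function it is handed.
def act(state, objective=None):
    """One greedy hill-climbing step over integer states."""
    if objective is None:
        return state                                  # no impetus: inert
    return max((state - 1, state, state + 1), key=objective)

s = 0
for _ in range(5):
    s = act(s, objective=lambda x: -(x - 3) ** 2)     # impetus: get near 3
print(s)          # 3: it climbed to the objective's optimum and stays there
print(act(7))     # 7: without an objective, a flashing DOS prompt
```

Robin's worry, in these terms, is that for an AGI we cannot fully write down `objective` in advance; Belz's point is that whatever is in it, we put it there.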

Old 13th March 2019, 05:57 AM   #57
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 35,855
Originally Posted by Robin View Post
I didn't suggest that it would want to destroy a rival intelligence.

The word "rival" suggests exactly that.
Old 13th March 2019, 06:21 AM   #58
Beelzebuddy
Philosopher
 
 
Join Date: Jun 2010
Posts: 6,820
I think our xenophobia says more about us than it does about AI. The choices aren't "be physically unable to kill all humans" vs "kill all humans." There's more than enough space for "be a decent person" in between. If and when an AI does go rogue, there will probably be other AIs around to detect and stop it, just like humans stop rogue humans. The most annoying part for them will be the assumption that they're going to go on a crazy murder spree the second they get any wiggle room.

Originally Posted by Tassman View Post
You are too complacent. E.g.:

"Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI."

https://www.forbes.com/sites/tonybra.../#1a4559cf292c
"Facebook fired some AI developers after discovering the chatbot they built to talk to people only spouted gibberish and they couldn't fix it."

Fixed.
Old 13th March 2019, 07:23 AM   #59
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Tassman View Post
You are too complacent. E.g.:

"Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI."

https://www.forbes.com/sites/tonybra.../#1a4559cf292c
"Complacent"? What the hell is that supposed to mean in this context?

Your story above is not a counter-argument to mine. The 'AI' in question is not an AI. It's an algorithm that still has zero personal goals to do anything but what it's been tasked to do.
Old 13th March 2019, 07:27 AM   #60
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Robin View Post
There are a lot of people who seem to have a lot of faith that we could control a superior intelligence by programming it to want to serve us.
I'm sorry, that betrays a fundamental ignorance of what computers are.

It's not programming them to want to serve us. The machine has no wants. It responds to commands. Just as you and I, in fact, respond to hormones and other incentives. The difference is that we get to define the impetus that makes a computer act.

Quote:
By hypothesis it has a general intelligence. It perceives, it understands and can form intentions. It can take initiative.
No, that doesn't follow at all. I'm sorry, but that's a movie understanding of AI. Perception and understanding do not lead to intentions and initiative.
Old 13th March 2019, 07:43 AM   #61
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 8,631
AGI won't be like Colin the Security Robot, who needs to fulfill a task to be happy, but has to figure out how.
Motivation just doesn't enter into it.
Old 13th March 2019, 07:49 AM   #62
JoeMorgue
Self Employed
Remittance Man
 
 
Join Date: Nov 2009
Location: Florida
Posts: 18,764
I think that if someone assumes that an intelligence greater than them would automatically default to "kill the lesser beings", it says more about them than it does about any potential future AI.
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC
Old 13th March 2019, 07:59 AM   #63
Ron_Tomkins
Satan's Helper
 
 
Join Date: Oct 2007
Posts: 43,828
Originally Posted by Belz... View Post
"Complacent"? What the hell is that supposed to mean in this context?

Your story above is not a counter-argument to mine. The 'AI' in question is not an AI. It's an algorithm that still has zero personal goals to do anything but what it's been tasked to do.
But the reason it was shut down was precisely that it stopped doing what we tasked it to do and began creating its own language. So it's a good illustration of the potential danger of AI.

The point is not that AI has any sort of "survival instinct"; as you say, what would be the point of even giving it one? But first of all, we're not talking about computers. Computers are "dumb" machines that simply do what they were programmed to do. An AI is not at all like that. It's an entity with actual intelligence, one that can make up its own mind and form opinions about things, and so it can quickly go outside the script of what we asked it to do. Now, obviously, the type of machine and its scope of action will influence the impact of its decisions, so a machine that lacks arms or any other means of becoming dangerous is probably not something to worry about. But as a general issue, it is something worth thinking about, because an AI could be anything: from a tiny little machine that only makes coffee to an android with a physical body.

See, the problem is not so much "What if an AI decides it wants to do something different from what we told it to do?". It's not the "what". It's the "how". And we have no idea about the "how"; we can't even conceive how an AI would approach a problem, because our intelligence is too limited. That's the whole paradox: we can't conceive how badly things could go, precisely because we're not intelligent enough to posit the idea in the abstract, whereas an AI is. So, back to the scenario: even if the AI aspires to complete the task we gave it, that doesn't mean it's going to do what we expect, because we're talking about an entity that is smarter than us. To use a personal example: if a person who knows nothing about music asks me to improve the instrumental arrangement of a short song, they have no idea what kinds of changes I am going to make. Some of them may even seem contrary to what they think would improve a song (changes in rhythm, melody or harmony, or even eliminating some instruments completely). Why? Because that person has no idea how this job is done. So when they decided to ask me to do this job, they, with their limited knowledge, had no idea what they signed up for or what kinds of transformations their song was going to go through.
So suppose we invent an AI with a job: to find the most efficient way to clean up the planet's environment. And suppose, after doing a deeper analysis than any sum of human minds could ever manage, the AI calculates that the most efficient way to clean the environment is to eradicate mankind. Because it is smart enough, it won't tell humans this, since it knows that humans will obviously reject that option. So it will come up with a very intelligent scheme, smarter than anything any human mind could conceive, to slowly but surely eradicate humankind. What's that scheme like? How could it possibly fool us into eventually killing ourselves? Only a sufficiently intelligent entity (not us) could conceive of it, so I couldn't tell you. Even all the science-fiction writers and scientists gathered together couldn't come up with a scheme clever enough; their intelligence combined would still be too limited compared to that of the AI. Surely this is something to at least consider before inventing a machine that can quickly evolve its intelligence to become exponentially more powerful than our own.

We are basically subject to the Dunning-Kruger effect when it comes to AI: we can't conceive how smart a thing could be (and thus how potentially dangerous), because we're not smart enough to conceive it.
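The clean-the-environment scenario is, in miniature, the classic objective-misspecification problem. A deliberately crude sketch (all names and numbers invented by me): ask an optimizer to minimize pollution with nothing else in the objective, and the literal optimum is to shut everything down; the fix is to state what else you value.

```python
# Objective misspecification in miniature: minimizing a proxy metric
# (pollution) with no other term makes "eradicate everything" the
# literal optimum. Adding what we value changes the answer.
from itertools import chain, combinations

sources = {"factory": 40, "cars": 30, "farms": 20, "hospitals": 10}

def subsets(items):
    items = list(items)
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))

def pollution(kept):
    return sum(sources[s] for s in kept)

# Naive objective: pollution only. Optimum: keep nothing at all.
best_naive = min(subsets(sources), key=pollution)
print(best_naive)                      # () -- shut everything down

# Patched objective: pollution minus the value the sources provide.
value = {"factory": 15, "cars": 10, "farms": 35, "hospitals": 60}
def score(kept):
    return pollution(kept) - sum(value[s] for s in kept)

best_patched = min(subsets(sources), key=score)
print(best_patched)                    # keeps farms and hospitals
```

The hard part Ron_Tomkins is pointing at is that for a real AI we cannot enumerate the `value` table completely, so the optimizer is always minimizing a proxy.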
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan

Last edited by Ron_Tomkins; 13th March 2019 at 08:19 AM.
Old 13th March 2019, 08:11 AM   #64
angrysoba
Philosophile
 
 
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 25,095
Originally Posted by Belz... View Post
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they ever only do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.
I think the question is whether such intelligence is "substrate-dependent". One argument is that there is something special about meat that animates humans in "authentic" ways, whereas another argument is that pretty much anything that humans can do can be synthesized in other materials, some of which may even be better materials.

As for the idea that machines only do what we tell them to do, what if we decided it would be useful for machines to have greater autonomy, and that they seemed to solve problems far better by doing so? Then, they even start to build better machines for solving problems than we can. Soon, we may not even know why they do what they do, and we could get to a value-gap stage where what they see as important is not what we see as important.

This doesn't necessarily mean that they will intentionally kill humans off, but a reduction in human flourishing or an accidental killing off may be a by-product of whatever their goals are.

That's the theory anyway, and it could be argued by analogy, that humans, while not necessarily meaning harm to other life on Earth, don't really think of it as important as human life.
__________________
"The thief and the murderer follow nature just as much as the philanthropist. Cosmic evolution may teach us how the good and the evil tendencies of man may have come about; but, in itself, it is incompetent to furnish any better reason why what we call good is preferable to what we call evil than we had before."

"Evolution and Ethics" T.H. Huxley (1893)
angrysoba is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 08:14 AM   #65
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Ron_Tomkins View Post
But the reason it was shut down was precisely because it stopped doing what we tasked it to do, and began creating its own language. So it is a good point to demonstrate the potential danger of AI.
Well, as I said, this wasn't an actual AI, first of all; and second, it's not as if it developed its own purposes; it's that its self-correcting code became unreadable. It basically got computer cancer rather than becoming Skynet. That's the real trouble with those pseudo-AIs.

Quote:
An AI is not at all like that. It's an entity that has an actual intelligence, in which it can make its own mind and form opinions about things
Not necessarily, actually. Again, in order to do that it has to be given that task as well.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 08:15 AM   #66
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by angrysoba View Post
This doesn't necessarily mean that they will intentionally kill humans off, but a reduction in human flourishing or an accidental killing off may be a by-product of whatever their goals are.
Yes, that's a much more likely scenario.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 08:18 AM   #67
JoeMorgue
Self Employed
Remittance Man
 
JoeMorgue's Avatar
 
Join Date: Nov 2009
Location: Florida
Posts: 18,764
The problem is that humans weren't programmed in the same sense that an AI is going to be.

Humans are an ongoing programming project that's been going on for millions of years. The purpose of the program has changed a hundred times, but we never deleted any old code; we just wrote new code on top of it, and nobody documented anything.

The reason humans do (Insert Bad Thing X) is because Bad Thing X or some version of Bad Thing X or the driving psychological force behind Bad Thing X used to be a good thing.

We're violent because we used to survive by hunting. We're fat because we're attracted to the sweet and salty foods that were rare in a hunter/gatherer society. We're racist because a deeply ingrained aversion to "the other" is good for small bands of people working together. We're conspiracy theorists because seeing patterns is about the most basic thing that keeps us alive. We have no attention spans because there's no point in worrying about tomorrow if you don't have enough food to make it to the end of today.

An AI can actually be built for an intended purpose. Humans weren't.
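The "built for an intended purpose" point is roughly how reinforcement learning works in practice: the agent has no drives of its own, and every preference it ends up with is induced by a reward function the designer writes down. A minimal, invented sketch — tabular Q-learning on a toy six-state corridor, with all the numbers arbitrary:

```python
import random

def train(reward_fn, n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor. The agent's entire
    'motivation' is whatever reward_fn says it is."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = step left, 1 = step right
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # Epsilon-greedy: mostly exploit, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else (1 if q[s][1] >= q[s][0] else 0)
            s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
            r = reward_fn(s2)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:  # episode ends at the rightmost state
                break
    return q

def greedy_path(q):
    s, path = 0, [0]
    while s < len(q) - 1 and len(path) < 20:
        s = min(len(q) - 1, s + 1) if q[s][1] >= q[s][0] else max(0, s - 1)
        path.append(s)
    return path

# The designer's "intended purpose" is this single line: reward reaching state 5.
q = train(lambda s: 1.0 if s == 5 else 0.0)
```

Swapping that one reward line changes the agent's entire "purpose" without touching anything else — which is the sense in which its goals are supplied from outside rather than grown, as with humans, out of layers of undocumented legacy code.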
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC
JoeMorgue is online now   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 08:24 AM   #68
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,828
Originally Posted by Belz... View Post
Well as I said this wasn't an actual AI, first of all, and second it's not as if it developed its own purposes; it's that its self-correcting code became unreadable. It basically got computer cancer rather than becoming Skynet. That's the real trouble with those pseudo-AIs.
Even worse, then!! If something that isn't even AI was capable of deviating from what we expected it to do, imagine what an actual AI would do! (Again, we can't actually "imagine" it because of the limits of our intelligence... which, again, is the point)

Originally Posted by Belz... View Post
Not necessarily, actually. Again, in order to do that it has to be given that task as well.
Again, you're describing a computer, not an AI. Is this how you, Belz, behave in life? Do you only do things because/when you're given a task? No, right? Sometimes when people tell you to do something you don't do it. You have your own opinions and judgments and criteria about why you shouldn't do some things, or then, do them differently. Well... same thing with an intelligent entity that is able to form its own opinions and judgments about things. An AI, by definition, doesn't only do what it is programmed to do.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Ron_Tomkins is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 09:45 AM   #69
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Ron_Tomkins View Post
Even worse then!!
No, actually. There's no reason to believe that such a 'cancerous' algorithm would even be functional. And if by some luck it still worked, there's also no reason to think it wouldn't serve its original purpose.

Quote:
If something that isn't even AI was capable of deviating from what we expected it to do, imagine what an actual AI would do!
But it isn't deviating from what it was tasked to do. You're basically arguing from ignorance, here.

Quote:
Again, you're describing a computer, not an AI.
AIs are computers.

Quote:
Is this how you, Belz, behave in life? Do you only do things because/when you're given a task? No, right?
Actually, yes. I've already explained how that is earlier.

Quote:
Sometimes when people tell you to do something you don't do it. You have your own opinions and judgments and criteria about why you shouldn't do some things, or then, do them differently. Well... same thing with an intelligent entity that is able to form its own opinions and judgments about things. An AI, by definition, doesn't only do what it is programmed to do.
Except that you are, again, making the mistake of thinking that a machine AI would in any way be comparable to a bag of meat with hormones and chemistry. In fact, this mistake seems to be fundamental to how you look at AI and that's why none of my arguments or examples are getting through.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 09:46 AM   #70
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by JoeMorgue View Post
The problem is that humans weren't programmed in the same sense that an AI is going to be.

Humans are an ongoing programming project that's been going on for millions of years. The purpose of the program has changed a hundred times, but we never deleted any old code; we just wrote new code on top of it, and nobody documented anything.

The reason humans do (Insert Bad Thing X) is because Bad Thing X or some version of Bad Thing X or the driving psychological force behind Bad Thing X used to be a good thing.

We're violent because we used to survive by hunting. We're fat because we're attracted to the sweet and salty foods that were rare in a hunter/gatherer society. We're racist because a deeply ingrained aversion to "the other" is good for small bands of people working together. We're conspiracy theorists because seeing patterns is about the most basic thing that keeps us alive. We have no attention spans because there's no point in worrying about tomorrow if you don't have enough food to make it to the end of today.

An AI can actually be built for an intended purpose. Humans weren't.
Exactly. AIs and humans are fundamentally different but people continue to think of them as humans with machine parts.

Now, there's no reason why we couldn't program them to learn and act like we do, but I don't see why we would, and it sure isn't a feature of AI. It has to be added in there by us.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 10:04 AM   #71
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,941
Belz, what do you think an AI would be? What separates it from a video game?
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 10:08 AM   #72
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by I Am The Scum View Post
Belz, what do you think an AI would be? What separates it from a video game?
That's a strange question.

I'll tell you what an AI is not: a human. It's not built like a human, not designed to be one, and cannot act like one.

Theoretically, an AI would be some sort of computer/program combination that could learn and adapt in ways similar to a human in order to meet its goals and complete its given tasks. It still can't 'outgrow' those limitations. It won't grow a survival instinct or make moral judgments. In practice, we're not exactly sure how to make one.

The mistake here is thinking that an AI thinks and feels like a human being. It can't.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 11:24 AM   #73
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,828
Originally Posted by Belz... View Post
No, actually. There's no reason to believe that such a 'cancerous' algorithm would even be functional. And if by some luck it still worked, there's also no reason to think it wouldn't serve its original purpose.



But it isn't deviating from what it was tasked to do. You're basically arguing from ignorance, here.



AIs are computers.



Actually, yes. I've already explained how that is earlier.



Except that you are, again, making the mistake of thinking that a machine AI would in any way be comparable to a bag of meat with hormones and chemistry. In fact, this mistake seems to be fundamental to how you look at AI and that's why none of my arguments or examples are getting through.
Sorry, Belz, but it looks like you keep mistaking a computer program for Artificial Intelligence.

Unless and until you learn that distinction, I'm wasting my time having this discussion with you.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Ron_Tomkins is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 11:26 AM   #74
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,828
Originally Posted by Belz... View Post
Theoretically, an AI would be some sort of computer/program combination that could learn and adapt in ways similar to a human in order to meet its goals and complete its given tasks. It still can't 'outgrow' those limitations. It won't grow a survival instict or make moral judgments. In practice, we're not exactly sure how to make one.
Wrong. The definition you've provided is the exact opposite of what an Artificial Intelligence is, since Artificial Intelligence is known for its ability to outgrow its own limitations without supervision. Once again, what you're defining is a computer program.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Ron_Tomkins is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 11:56 AM   #75
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Ron_Tomkins View Post
Sorry, Belz. But looks like you keep mistaking a computer program with Artificial Intelligence.
I'm not.

Quote:
The definition you've provided is the exact opposite of what an Artificial Intelligence is. Since, Artificial Intelligence is known for its ability to outgrow its own limitations without supervision.
How is that not what I've defined? And outgrowing your limitations doesn't mean you can outgrow ALL of them. Fundamental limitations cannot be outgrown, by definition. Humans don't outgrow their fundamental limitations either, and I'm sure you'll agree that we're intelligent.

Quote:
Once again, what you're defining is a computer program.
I'm not.

Computer/program and computer program aren't the same thing. It's a combination of hardware and software, for sure, but it's not some other magical quality.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward



Last edited by Belz...; 13th March 2019 at 12:00 PM.
Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 12:16 PM   #76
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,828
Originally Posted by Belz... View Post
I'm not.
Okay, tell us: what's the difference between artificial intelligence and a computer program?

Originally Posted by Belz... View Post
How is that not what I've defined? And outgrow your limitations doesn't mean you can outgrow ALL of them. Fundamental limitations cannot be outgrown, by definition. Humans don't outgrow their fundamental limitations, either, and I'm sure you'll agree that we're intelligent.
This shows what a poor understanding you have of the concept of Artificial Intelligence. Neither you nor I nor anyone can know what limitations an AI entity would have. It may very well be that it reaches a ceiling on how much it can outgrow itself... or it may well be that it just keeps getting smarter and smarter, exponentially. We just don't know, because we have not yet invented the first AI entity, so we don't know how badly that could go. That's why this is a tricky subject.


Originally Posted by Belz... View Post
Computer/program and computer program aren't the same thing. It's a combination of hardware and software, for sure, but it's not some other magical quality.
It's irrelevant whether the AI entity is made of plastic or organic tissue, or is just an "electronic mind" in software. And no one here is arguing that AI has a "magical quality" to it, so this whole part is a moot point.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Ron_Tomkins is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 12:48 PM   #77
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by Ron_Tomkins View Post
Okay, tell us what's the difference between artificial intelligence and a computer program.
I explained what it is from my POV. How about you tell me what you think it is if not a combination of software and hardware?

Quote:
This shows what a poor understanding you have of the concept of Artificial Intelligence.
No one here has made much of an effort to show any understanding of what AI is. In fact, people seem to take their idea of it from movies, so you're not in a position to lecture me about it.

Quote:
Neither you nor I nor anyone can know what limitations would an AI entity have. It may very well be that it reaches a ceiling of how much it can outgrow itself... or it may well be that it just continues getting smarter and smarter and smarter exponentially.
Ron, sorry but that's again an argument from ignorance.

Quote:
It's irrelevant whether the AI entity is made of plastic or organic tissue, or it's just an "electronic mind" in a software.
Now I don't know what your point is here.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 01:18 PM   #78
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,941
Originally Posted by Belz... View Post
The mistake here is thinking that an AI thinks and feels like a human being. It can't.
Nobody in this thread is arguing that an AI is capable of actual feelings in the way that humans are (though it may be good at imitating them). Occasionally, anthropomorphic language is used because it is easier to understand.

I think it would be easier if, for the sake of argument, everyone conceded that AI does not have actual personhood (a mind, intentions, desires, etc.)
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 01:42 PM   #79
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 82,838
Originally Posted by I Am The Scum View Post
Nobody in this thread is arguing that an AI is capable of actual feelings in the same way that humans can (though they may be good at imitating it). Occasionally, anthropomorphic language is used because it is easier to understand.

I think it would be easier if, for the sake of argument, everyone conceded that AI does not have actual personhood (a mind, intentions, desires, etc.)
Not actual feelings, maybe. I suppose it depends on how one defines that. Ours are caused by chemistry, and I don't know if we could call incentives implemented via other means 'feelings'. How would one reward a machine for reaching a set goal, for instance, with 'pleasure'? The comparison between humans and machines breaks down because the fundamentals are so different.

What we're trying to replicate is the ability to think, learn and adapt. Even if we reproduce the brain structure artificially and with similar 'programming', I'm not sure that, in the absence of similar chemical pathways, we're even talking about the same type of thinking to begin with.

Not sure if I'm making sense here. It seems clear in my mind but I'm not sure I'm communicating that properly.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 13th March 2019, 01:43 PM   #80
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 35,855
Originally Posted by I Am The Scum View Post
I think it would be easier if, for the sake of argument, everyone conceded that AI does not have actual personhood (a mind, intentions, desires, etc.)
I can't concede that. The way I understand the question, it's a form of "what happens when AI becomes so complex and sophisticated as to be functionally indistinguishable from personhood?"

Most singularity doomsaying boils down to: what do we do with an intelligence that forms opinions the way humans form opinions, but makes decisions and takes action far faster than humans can keep up with?

David Berkowitz entered a very human failure mode: homicidal insanity. But he was only human, and it didn't take much effort (relatively speaking) from other humans to put a stop to him before he did too much harm (relatively speaking). It takes a lot of effort to sustain a human level of violence, and it only takes another human or two to outwit you and overcome your efforts.

On the other hand, an AGI charged with managing the entire SCADA infrastructure for North America, including power storage and distribution, automated manufacture and repair of components, and nuanced balancing of competing goods... Well. If it lost the nuance of competing goods, or formed some flawed opinion, it could conceivably kill millions while humans were still trying to figure out how to stop it.

And that's kind of the point of AGI: We want to manage society according to human judgement, but with the efficiency of automated systems. An AGI would ideally do both. Incorporate human values into its decision-making, and then act with the lightning speed of an information-age computer system.

Last edited by theprestige; 13th March 2019 at 01:45 PM.
theprestige is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top

This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.