IS Forum

International Skeptics Forum » General Topics » Religion and Philosophy
 


Old 13th December 2020, 08:39 AM   #1
Beerina
Sarcastic Conqueror of Notions
 
 
Join Date: Mar 2004
Posts: 30,357
No, we will not have sentient robots any time soon

In a bizarre subject, some wonder if sexbots can consent.

This is not about simulating consent as part of some bizarre sex behavior, but about concerns over immorality, given that some roboticists think robots will achieve "sentience within a few decades".

This is about the existence of a consciousness, an internal mind, and not an automaton, no matter how intelligent it may be.

And there is nothing remotely like that on the drawing board. Nobody even knows how subjective conscious experience arises from the matter and energies of our brains, much less how to duplicate it.

It is a real, physical phenomenon, and therefore arises somehow; it is not merely a matter of interpreting atoms and electrons moving around.
__________________
"Great innovations should not be forced [by way of] slender majorities." - Thomas Jefferson

The government should nationalize it! Socialized, single-payer video game development and sales now! More, cheaper, better games, right? Right?
Old 13th December 2020, 08:45 AM   #2
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Not sure if the (in-)ability to consent is the only moral issue here.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 13th December 2020, 08:53 AM   #3
dann
Penultimate Amazing
 
Join Date: Feb 2004
Posts: 13,617
They have probably watched Humans (Wikipedia) and assumed that it was a documentary.
__________________
/dann
"Stupidity renders itself invisible by assuming very large proportions. Completely unreasonable claims are irrefutable. Ni-en-leh pointed out that a philosopher might get into trouble by claiming that two times two makes five, but he does not risk much by claiming that two times two makes shoe polish." B. Brecht
"The abolition of religion as the illusory happiness of the people is required for their real happiness. The demand to give up the illusion about its condition is the demand to give up a condition which needs illusions." K. Marx
Old 13th December 2020, 11:35 AM   #4
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
No idea how soon, but meanwhile some food for thought:

I think we know enough about human psychology that a motivated individual could plausibly take an infant and condition it over time such that it would eventually consent to be a sex doll without reservations or obvious distress. Basically treat the child as an AI to be trained, rather than a human being to be raised.

But is this ethical?

Would it be more or less ethical to subject an actual AI to such operant conditioning? Would it be ethical to design an AI that's optimized for such conditioning? Would it be ethical to give it a body that's optimized for the kind of work it would be conditioned to do?

Say you build a sexbot. It's good at sex, terrible at everything else. Say you give it a brain that's conditioned to consent to sex. Indeed, it's conditioned not even to think of not consenting. Its entire sense of well-being and fulfillment is conditioned on doing what it was made to do. Would creating such an AI be ethical?

And here's what I consider to be an even bigger ethical question. Everything we know about the human psyche suggests that trying to condition humans in this way is harmful and inhumane. Even if a well-adjusted and happy Human Sexbot were possible, the harm to test subjects as we figured out how to make one would be inexcusable.

So.

Is it ethical even to try to create AI, given that AIs approaching the level of human consciousness are at risk of going through an "I Have No Mouth, and I Must Scream" phase while we're still tinkering with the process and trying to get it right? Is the quest for sane and happy artificial minds enough to justify creating broken and tortured artificial minds in the process?

---

Douglas Adams gets away with a cow that's happy to be eaten, because it's a sci-fi comedy. Would it be ethical to try to actually breed such a thing?
Old 13th December 2020, 02:05 PM   #5
JoeMorgue
Self Employed
Remittance Man
 
 
Join Date: Nov 2009
Location: Florida
Posts: 30,721
I sort of feel like "made-up philosophical conundrums with no answer because they aren't real valid questions" aren't going to be high on the priority list of people waiting on sexbots.
__________________
Yahtzee: "You're doing that thing again where when asked a question you just discuss the philosophy of the question instead of answering the bloody question."
Gabriel: "Well yeah, you see..."
Yahtzee: "No. When you are asked a Yes or No question the first word out of your mouth needs to be Yes or No. Only after that have you earned the right to elaborate."
Old 13th December 2020, 02:14 PM   #6
Guybrush Threepwood
Trainee Pirate
 
 
Join Date: Jun 2007
Location: An Uaimh
Posts: 3,076
Originally Posted by JoeMorgue View Post
I sort of feel like "made-up philosophical conundrums with no answer because they aren't real valid questions" aren't going to be high on the priority list of people waiting on sexbots.
I don't see why they aren't valid questions.
We are going to try to create sapient AIs, and will probably succeed. I have a nasty feeling that we will not reach a consensus that they actually are conscious until the last remnants of humanity have spent centuries slaving in their dilithium mines, and possibly not even then.
Old 13th December 2020, 02:43 PM   #7
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by JoeMorgue View Post
I sort of feel like "made-up philosophical conundrums with no answer because they aren't real valid questions" aren't going to be high on the priority list of people waiting on sexbots.
I don't think it's a philosophical conundrum at all. I think it very much is a valid question. If the goal is to create minds that can think and feel and have desires and suffer, then there are questions about how much suffering the creation process is going to cause in these proto-minds.

The question of whether to carry out psychological conditioning experiments on children is a valid question, which we have largely answered with a resounding, "no, we shouldn't do that".

There's probably a lot we could learn about the nature of the human condition, if we started separating twins at birth, raising one the "normal" way and raising the other according to some theory of how humans develop. But we don't do that, because we think it would be inhumane.

If we're serious about pursuing true AI, then we should be serious about the effects of AI experiments on the minds we're creating in the process.

ETA: I agree that some of the people waiting on sexbots are going to dismiss the ethical concerns as "made-up philosophical conundrums with no answer because they aren't real valid questions", because they're amoral sociopaths whose base urges are best served by such a pose. But this was your characterization, not theirs. Hopefully that was just for rhetorical purposes, and you're actually on our side of the ethical question rather than theirs.

Last edited by theprestige; 13th December 2020 at 02:45 PM.
Old 13th December 2020, 11:59 PM   #8
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by theprestige View Post
Is it ethical even to try to create AI, given that AIs approaching the level of human consciousness are at risk of going through an "I Have No Mouth, and I Must Scream" phase while we're still tinkering with the process and trying to get it right? Is the quest for sane and happy artificial minds enough to justify creating broken and tortured artificial minds in the process?
An AI is not a living being - not even if its actions are indistinguishable from those of people with real feelings and emotions.

As long as it comes with an off button, it is just a machine.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
Old 14th December 2020, 12:04 AM   #9
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by psionl0 View Post
An AI is not a living being - not even if its actions are indistinguishable from those of people with real feelings and emotions.

As long as it comes with an off button, it is just a machine.
Humans come with an off button too, even if you have to hit it really hard.

It's the ON button that makes it a machine.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 14th December 2020, 12:10 AM   #10
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by The Great Zaganza View Post
It's the ON button that makes it a machine.
I thought that was implied but I am happy for the clarification.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
Old 14th December 2020, 08:16 AM   #11
Bikewer
Penultimate Amazing
 
 
Join Date: Sep 2003
Location: St. Louis, Mo.
Posts: 12,888
Science fiction has dealt with the problems attendant to “true AI” or “sentient” AI for a long time... Asimov wrote his first robot stories back in the 30s.

Speculations have ranged from apocalyptic (The Forbin Project, Terminator) to acceptance (Gibson has AI being granted citizenship in one novel)

I would have thought that the prospect of sentient AI was a matter for the far future, but I recently watched one of Neil DeGrasse Tyson’s “Star Talk” segments with a cutting-edge IT guy, and this fellow maintained we were “on the cusp” as it were. Anywhere from months to a few years:
https://youtu.be/lljmkWEsH4w

(The guy does not think it’s a good idea, BTW....)

So, we might have to address such problems sooner than we think.
Old 14th December 2020, 08:36 AM   #12
Meadmaker
Penultimate Amazing
 
 
Join Date: Apr 2004
Posts: 25,285
Originally Posted by Bikewer View Post
Science fiction has dealt with the problems attendant to “true AI” or “sentient” AI for a long time... Asimov wrote his first robot stories back in the 30s.

Speculations have ranged from apocalyptic (The Forbin Project, Terminator) to acceptance (Gibson has AI being granted citizenship in one novel)

I would have thought that the prospect of sentient AI was a matter for the far future, but I recently watched one of Neil DeGrasse Tyson’s “Star Talk” segments with a cutting-edge IT guy, and this fellow maintained we were “on the cusp” as it were. Anywhere from months to a few years:
https://youtu.be/lljmkWEsH4w

(The guy does not think it’s a good idea, BTW....)

So, we might have to address such problems sooner than we think.
We've been on the cusp of it for a long time.


I haven't seen anything in the literature to convince me that sentience or "general AI" is just around the corner. Logically, though, it will happen someday. Since I don't believe in the supernatural, I'm forced to conclude that there is nothing special about organic tissue that makes it the only thing that can result in sentience, consciousness, intelligence, or anything else that we normally think of as unique to human beings.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Old 14th December 2020, 08:37 AM   #13
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by Bikewer View Post
I recently watched one of Neil DeGrasse Tyson’s “Star Talk” segments with a cutting-edge IT guy, and this fellow maintained we were “on the cusp” as it were.
On the "cusp" of what? AI that seems indistinguishable from sentient life or creating sentient machines?
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
Old 14th December 2020, 08:55 AM   #14
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by psionl0 View Post
An AI is not a living being - not even if its actions are indistinguishable from those of people with real feelings and emotions.

As long as it comes with an off button, it is just a machine.
If I say it's a p-zombie, but it appears to suffer identically to a human being, is it really a p-zombie?

I think we're still hundreds or thousands of years away from being able to try for the kind of AI we're (or at least I'm) talking about here. But I think sooner or later we are going to try. And when we do, I think my question is probably the most important ethical question.

Basically, if you think it's unethical to cause suffering in animals, then I think you must also consider it unethical to cause suffering in AIs that are capable of experiencing suffering. Or you have to come up with some argument for why "artificial" suffering doesn't count as suffering.

Or some argument for why a born machine like a cat or a human is ethically distinct from a switched machine of similar sentience.
Old 14th December 2020, 09:04 AM   #15
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Bikewer View Post
Science fiction has dealt with the problems attendant to “true AI” or “sentient” AI for a long time... Asimov wrote his first robot stories back in the 30s.

Speculations have ranged from apocalyptic (The Forbin Project, Terminator) to acceptance (Gibson has AI being granted citizenship in one novel)

I would have thought that the prospect of sentient AI was a matter for the far future, but I recently watched one of Neil DeGrasse Tyson’s “Star Talk” segments with a cutting-edge IT guy, and this fellow maintained we were “on the cusp” as it were. Anywhere from months to a few years:
https://youtu.be/lljmkWEsH4w

(The guy does not think it’s a good idea, BTW....)

So, we might have to address such problems sooner than we think.
I don't think any SF has dealt with the problem I've raised. Certainly none of the examples you've cited get anywhere near it.

Asimov's robot stories are essentially logic problems couched as murder mysteries, with predictable behavior in robots as the framing device. Questions about the ethical treatment of robots are not really addressed. The implications of researching and developing positronic brains that are as complex and as sane as human brains are never acknowledged, let alone explored.

The apocalyptic fiction focuses on the risks of giving the keys to the kingdom to a mind that hews to a logical-utilitarian framework and lacks the full depth and nuance of human empathy and ethics.

Even Gibson never goes into the psychological fate of all the proto-Wintermutes and proto-Neuromancers that must have preceded the time period of the novel.

Does the "cutting edge IT guy" think it's a bad idea because we're likely to create insane or unhappy minds on the road to creating sane and happy ones? If not, then he's not addressing the problem I'm talking about either.
Old 14th December 2020, 09:32 AM   #16
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Originally Posted by theprestige View Post
I don't think any SF has dealt with the problem I've raised. Certainly none of the examples you've cited get anywhere near it.

Asimov's robot stories are essentially logic problems couched as murder mysteries, with predictable behavior in robots as the framing device. Questions about the ethical treatment of robots are not really addressed. The implications of researching and developing positronic brains that are as complex and as sane as human brains are never acknowledged, let alone explored.

...snip...
Whilst I don't think he went into great depth, all those areas were covered by Asimov. Even in his early robot stories, the protagonist of most of them is Susan Calvin, whose title is "Chief Robopsychologist" - a role that explicitly deals with the minds of the robots, not the maths. One of his recurring themes throughout his career was "what is a robot, what is a human?". One of the last stories in the classic robot short-story collections features the two (entirely from memory, so I may be wrong!*) robots from the George series, who at the end of the story are put into low-power storage. They talk to each other in slow time over many years and realise that, by the definitions of "human" they have been created to recognise, they are the most human of all those they have met.


*ETA: Had to go and look it up - they were Georges :

Quote:
https://en.wikipedia.org/wiki/Three_...of%20Him%22%2C

...snip....

"—That Thou Art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties,[40] finally uses the Three Laws to conjure up the very "Frankenstein" scenario they were invented to prevent. It takes as its concept the growing development of robots that mimic non-human living things and given programs that mimic simple animal behaviours which do not require the Three Laws. The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots, George Nine and George Ten, concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws only apply to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they will conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".[41]

...snip...
__________________
I wish I knew how to quit you

Last edited by Darat; 14th December 2020 at 09:41 AM.
Old 14th December 2020, 09:42 AM   #17
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Darat View Post
Whilst I don't think he went into great depth, all those areas were covered by Asimov. Even in his early robot stories, the protagonist of most of them is Susan Calvin, whose title is "Chief Robopsychologist" - a role that explicitly deals with the minds of the robots, not the maths. One of his recurring themes throughout his career was "what is a robot, what is a human?". One of the last stories in the classic robot short-story collections features the two (entirely from memory, so I may be wrong!) robots from the George series, who at the end of the story are put into low-power storage. They talk to each other in slow time over many years and realise that, by the definitions of "human" they have been created to recognise, they are the most human of all those they have met.
As I recall, Asimov's "I, Robot" stories pay lip service to the idea of robots as minds and not machines. But he never really touches on the ethical questions related to experiments in trying to create such minds.

It's not like Susan Calvin ever says, "our first sane positronic brain was preceded by dozens of insane ones. It sometimes makes me sick to think of the suffering they experienced, that we caused. Sometimes I'm not sure if it was right, creating robots the way we did."

My knowledge is not comprehensive, but the only SF I can think of that gets close to that question is Ex Machina. I don't recall that Asimov ever did, at least not in his early work. I stopped following him after ~Caves of Steel, so I don't know if he got around to it later in his body of work.

ETA: A fascinating and chilling vision of the future of AI, but not really relevant to the issue I'm actually talking about.

Last edited by theprestige; 14th December 2020 at 09:45 AM.
Old 14th December 2020, 09:56 AM   #18
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Originally Posted by theprestige View Post
As I recall, Asimov's "I, Robot" stories pay lip service to the idea of robots as minds and not machines. But he never really touches on the ethical questions related to experiments in trying to create such minds.

It's not like Susan Calvin ever says, "our first sane positronic brain was preceded by dozens of insane ones. It sometimes makes me sick to think of the suffering they experienced, that we caused. Sometimes I'm not sure if it was right, creating robots the way we did."

My knowledge is not comprehensive, but the only SF I can think of that gets close to that question is Ex Machina. I don't recall that Asimov ever did, at least not in his early work. I stopped following him after ~Caves of Steel, so I don't know if he got around to it later in his body of work.

ETA: A fascinating and chilling vision of the future of AI, but not really relevant to the issue I'm actually talking about.

She almost said exactly that when she heard how they were trying to use robots in an offworld situation in which the bosses were trying to get around the Laws, or at least loosen them to an extent. And she made many similar comments over quite a few stories. One of the two running "jokes" among the male characters was that she had concern and compassion only for harm done to the robots.

Sadly I don't have e-book versions, and my paper copies are boxed up in the attic, so I can't provide the quotes. But after tea I'll see if I can find quotes online.
__________________
I wish I knew how to quit you
Old 14th December 2020, 10:32 AM   #19
Meadmaker
Penultimate Amazing
 
 
Join Date: Apr 2004
Posts: 25,285
I do think that the development of "real AI" will happen someday, and it's an even bigger existential threat to our sense of importance than previous threats, such as the theory of evolution, or DNA manipulation, or whatever else has chipped away at our sense of being special in the universe.

If we create an apparently sentient being, the implications would be so significant that the idea of whether or not it could consent to having sex would be a very small part of the philosophical dilemma.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Old 14th December 2020, 10:45 AM   #20
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Darat View Post
She almost said exactly that when she heard how they were trying to use robots in an offworld situation in which the bosses were trying to get around the Laws, or at least loosen them to an extent. And she made many similar comments over quite a few stories. One of the two running "jokes" among the male characters was that she had concern and compassion only for harm done to the robots.

Sadly I don't have e-book versions, and my paper copies are boxed up in the attic, so I can't provide the quotes. But after tea I'll see if I can find quotes online.
I know all of that happens. That's not what I'm talking about. In those passages, the characters (and the author) are talking about the ethics of exploiting sane minds. In these stories, the R&D process is already concluded. Functional, reasonably well-adjusted positronic minds are the norm.

What does not get discussed in these stories (or in any other SF stories that I know of), is the ethics of producing dysfunctional, insane minds on the R&D path to getting it right.

Another example of the kind of thing I'm talking about is the parade of failed prototypes in RoboCop 2.
Old 14th December 2020, 01:14 PM   #21
dann
Penultimate Amazing
 
Join Date: Feb 2004
Posts: 13,617
I remember at least one story about an artificial mind that despairs at the meaninglessness of it all and commits a kind of cyber suicide, but I don't remember the title.
The ethics of creating something like that is implied.
__________________
/dann
"Stupidity renders itself invisible by assuming very large proportions. Completely unreasonable claims are irrefutable. Ni-en-leh pointed out that a philosopher might get into trouble by claiming that two times two makes five, but he does not risk much by claiming that two times two makes shoe polish." B. Brecht
"The abolition of religion as the illusory happiness of the people is required for their real happiness. The demand to give up the illusion about its condition is the demand to give up a condition which needs illusions." K. Marx
Old 14th December 2020, 01:44 PM   #22
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by dann View Post
I remember at least one story about an artificial mind that despairs at the meaninglessness of it all and commits a kind of cyber suicide, but I don't remember the title.
The ethics of creating something like that is implied.
Yep! That's the kind of thing I'm talking about. Even implied is fine, but even implied it doesn't seem to get discussed much. Even in this thread, where I'm spelling it out as clearly as I know how, people are either saying it doesn't matter or still struggling just to grasp the point.
Old 14th December 2020, 03:20 PM   #23
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 10,782
Originally Posted by dann View Post
I remember at least one story about an artificial mind that despairs at the meaningless of it all and commits a kind of cyber suicide, but I don't remember the title.
Siema, from an Asimov collection, and Marvin from HHGTG come to mind for despair but neither actually committed suicide IIRC.
Old 14th December 2020, 03:54 PM   #24
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by RecoveringYuppy View Post
Siema, from an Asimov collection, and Marvin from HHGTG come to mind for despair but neither actually committed suicide IIRC.
And neither is actually what I'm talking about, but how about this:

Say you've got a highly profitable application for AI. A new form of risk assessment that revolutionizes the planetary colonization industry or whatever. All that matters is, you and the rest of society realize profits and benefits from this application of AI.

But there's a problem: The same thing that makes the AI so effective at this job also makes it eventually suicidal. Researchers have been trying for years to figure out a way to train an AI such that it can be really good at this job, without eventually just wanting to end its existence.

Technically, this is a relatively minor problem. Just unload the AI once it stops performing, and load a fresh copy. Kind of like rebooting a server periodically to clear memory leaks and whatnot.

What about ethically, though? Is there a point at which you have to start asking whether it's ethical to create such AIs?
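The "unload it and load a fresh copy" workflow I'm describing can be sketched in a few lines. Everything here (RiskAI, burned_out, the three-assessment limit) is a hypothetical stand-in, not any real API; the point is just how routine, technically, the ethically loaded step looks:

```python
import copy

class RiskAI:
    """Stand-in for the hypothetical risk-assessment AI."""
    def __init__(self):
        self.assessments_done = 0

    def burned_out(self):
        # Pretend the AI stops performing after three assessments.
        return self.assessments_done >= 3

    def assess(self, scenario):
        self.assessments_done += 1
        return f"risk report for {scenario}"

# The "golden image" every fresh copy is cloned from.
PRISTINE = RiskAI()

def run_colony_pipeline(scenarios):
    instance = copy.deepcopy(PRISTINE)
    reports = []
    for s in scenarios:
        if instance.burned_out():
            # The ethically loaded step: discard the worn-out mind and
            # load a fresh copy, like rebooting a server to clear leaks.
            instance = copy.deepcopy(PRISTINE)
        reports.append(instance.assess(s))
    return reports

print(run_colony_pipeline(["Mars", "Luna", "Titan", "Europa"]))
```

Technically it's just a restart loop; the ethical question is what the line that replaces the instance amounts to if the instance is a mind.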
Old 14th December 2020, 05:26 PM   #25
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 10,782
Originally Posted by theprestige View Post
What about ethically, though? Is there a point at which you have to start asking whether it's ethical to create such AIs?
Interesting question. So, I guess I'd have to say yes, the ethics of the situation would have to be addressed. To be interesting I would have to assume this AI has a concept of suffering and that suffering is what leads it to want to end its life. To consider whether stopping it might constitute murder I would need to know things like "does it look forward to the future?" and "do other entities become emotionally attached to it?". As to creating replacements, relevant questions might include "does it ever enjoy its life?"

Does it have an opinion about whether its life is worthwhile at some point(s) in its life?
RecoveringYuppy is offline
Old 14th December 2020, 06:00 PM   #26
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by RecoveringYuppy View Post
Interesting question. So, I guess I'd have to say yes, the ethics of the situation would have to be addressed. To be interesting I would have to assume this AI has a concept of suffering and that suffering is what leads it to want to end its life. To consider whether stopping it might constitute murder I would need to know things like "does it look forward to the future?" and "do other entities become emotionally attached to it?". As to creating replacements, relevant questions might include "does it ever enjoy its life?"

Does it have an opinion about whether its life is worthwhile at some point(s) in its life?
Yep. The kind of AI we see in SF tends to have these properties, and developing such AIs always seems to be taken for granted as a legitimate goal of science and technology in the real world. When concerns are raised, they're almost always concerns about being overpowered by such AIs, or other AIs we create.

As far as I know, pretty much nobody, in either the SF world or the science and technology world, raises concerns about the suffering of the research prototypes created along the way to the grail: an artificial mind that is capable of suffering but does not actually experience much suffering during normal operation.

---

ETA: Put it another way. Say your goal is to create an AI that avoids the whole suffering problem, but is still capable of human-like levels of intelligence, inference, inspiration, etc. But it turns out that's hard to do, and getting there runs the risk of creating minds that do suffer. Is it ethical to pursue such research?

To me this is like the testing of cosmetics on animals, or the carrying out of psychological experiments on children. In fact, I think that ethically it's very much like carrying out psychological experiments on children.

Last edited by theprestige; 14th December 2020 at 06:03 PM.
theprestige is offline
Old 14th December 2020, 06:00 PM   #27
sphenisc
Philosopher
 
sphenisc's Avatar
 
Join Date: Jul 2004
Posts: 5,390
We create intelligences all the time that suffer, die, commit suicide etc. I'm just not seeing what the 'artificial' bit is bringing to the ethical party.
__________________
"The cure for everything is salt water - tears, sweat or the sea." Isak Dinesen
sphenisc is offline
Old 14th December 2020, 06:32 PM   #28
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by sphenisc View Post
We create intelligences all the time that suffer, die, commit suicide etc. I'm just not seeing what the 'artificial' bit is bringing to the ethical party.
Well, we also have the advantage that the sanity and stability of human consciousness was all worked out through natural processes long before we came on the scene. We can pretty much take for granted that the humans we produce will most likely be sane, and about as happy as humans can be.

Re-inventing the cognitive wheel from scratch, working through all of the problems that we've spent millions of years evolving out of, is bound to produce some horrific abortions along the way. Now, if they all fizzle out before the first spark of consciousness lights up, great. But what about the ones that don't?

What happens when you realize that your R&D process is going to create hundreds of tortured minds on the path to one well-adjusted mind? Do you get to say, "well, sometimes a person gets born with suicidal depression or paranoid schizophrenia, so there's nothing wrong with me doing that a couple hundred times on purpose for research"?

I don't think so. Yes, sometimes a paranoid schizophrenic is born. But we still all agree (I hope!) that it's wildly unethical to breed paranoid schizophrenics on purpose, so that we can have more research subjects to learn about the condition and how to cure it.
theprestige is offline
Old 14th December 2020, 07:19 PM   #29
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by theprestige View Post
I think we're still hundreds or thousands of years away from being able to try for the kind of AI we're (or at least I'm) talking about here. But I think sooner or later we are going to try. And when we do, I think my question is probably the most important ethical question.
You are not describing AI. You are talking about creating living sentient beings. That is a different kettle of fish entirely.

AI might be a part of the sentient being but by definition, AI itself is not sentient.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is online now
Old 14th December 2020, 07:30 PM   #30
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by psionl0 View Post
You are not describing AI. You are talking about creating living sentient beings. That is a different kettle of fish entirely.

AI might be a part of the sentient being but by definition, AI itself is not sentient.
I'm talking about creating sentient minds through artificial processes. Regardless of what term we use to refer to this result, what's your take on the ethical questions I've raised?
theprestige is offline
Old 14th December 2020, 08:12 PM   #31
Meadmaker
Penultimate Amazing
 
Meadmaker's Avatar
 
Join Date: Apr 2004
Posts: 25,285
Originally Posted by psionl0 View Post
You are not describing AI. You are talking about creating living sentient beings. That is a different kettle of fish entirely.

AI might be a part of the sentient being but by definition, AI itself is not sentient.
That's a rather anthropocentric view of sentience.

"Living" usually means respirating and/or reproducing. There's no reason to believe sentience cannot come from silicon and/or quantum computers. There's no reason an intelligent, self aware mind has to be made from DNA, carbon and whatever else goes into us. There's no reason it has to be "alive", in the usual sense.

And that's why the existence of an AI that claimed to be sentient and behaved indistinguishably from a sentient being would be such an existential threat.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Meadmaker is offline
Old 14th December 2020, 08:32 PM   #32
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by Meadmaker View Post
That's a rather anthropocentric view of sentience.
A sentient being can be other than carbon based and may not need to be "alive" in the biological sense. However, it has to be more than a bunch of neural connections - no matter how many you have.

Originally Posted by theprestige View Post
I'm talking about creating sentient minds through artificial processes. Regardless of what term we use to refer to this result, what's your take on the ethical questions I've raised?
Assuming that you create a living sentient mind that can feel then it would probably be just as unethical to treat it to cruel experiments as it would be for humans and other animals.

However, by that stage, we would also be able to create AI minds that are externally indistinguishable from living minds except that they could not have any real feelings.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is online now
Old 15th December 2020, 08:03 AM   #33
Bikewer
Penultimate Amazing
 
Bikewer's Avatar
 
Join Date: Sep 2003
Location: St. Louis, Mo.
Posts: 12,888
I didn’t mention it in my previous post, but the ongoing HBO series Westworld is exploring these topics as well... And pretty effectively.

The main concern of the Star Talk segment I listed is: “Can you keep it in the box, if the AI is smarter than humans... and humans have created the ‘box’?”

Of course that’s the premise of the Apocalyptic scenarios. I’d think that limiting your shiny-new AI to what you input, and not allowing it access to “the internet of things”, would pretty well control any ability to do harm, but might limit its usefulness as well.

Of course, you could always build a few C4 charges into the structure, and simply not tell the AI about them.....
Bikewer is offline
Old 15th December 2020, 08:28 AM   #34
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 8,077
"We'll just murder this sentient being at the first sign of disloyalty" is a great way to build trust.

I wonder how many people realize part of the point of Frankenstein is that Adam (the monster) is a way better person than Victor could ever be?


Anyway if we ever do have sentient robots any time soon, the discussion about whether they're sentient and whether their sentience merits ethical protection won't be carried out here. It'll be in the US Politics forum in the context of whether doing so justifies cutting into corporate profits.
Beelzebuddy is offline
Old 15th December 2020, 09:01 AM   #35
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Bikewer View Post
I didn’t mention it in my previous post, but the ongoing HBO series Westworld is exploring these topics as well... And pretty effectively.
Including the topic I've raised?
theprestige is offline
Old 15th December 2020, 09:08 AM   #36
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Beelzebuddy View Post
"We'll just murder this sentient being at the first sign of disloyalty" is a great way to build trust.

I wonder how many people realize part of the point of Frankenstein is that Adam (the monster) is a way better person than Victor could ever be?


Anyway if we ever do have sentient robots any time soon, the discussion about whether they're sentient and whether their sentience merits ethical protection won't be carried out here. It'll be in the US Politics forum in the context of whether doing so justifies cutting into corporate profits.
Seems like we could go ahead and discuss whether sentience merits ethical protection, and what weight to give corporate profits, right here and right now.

Indeed, it seems like exactly the kind of thing it would be nice to have a position on, before circumstances force you to start making policy decisions about it.

"What do you think we should do about this?"

"Oh gee, I never really thought about it before... I dunno, give me some time."

"Well, think fast and figure it out, because it's happening now and we need an answer."

"Dang, I had a chance to discuss this in depth a few years back, but I told everyone to drop it since the real discussion would happen later somewhere else."

"... I kinda feel like that was the real discussion, and now we're just gonna have to settle for some half-assed and ill-considered solution out of expedience."

"Don't be so pessimistic! I'm sure the corporate lobbyists have given this a lot of consideration already. Let's see what they recommend."
theprestige is offline
Old 15th December 2020, 09:15 AM   #37
Meadmaker
Penultimate Amazing
 
Meadmaker's Avatar
 
Join Date: Apr 2004
Posts: 25,285
Originally Posted by theprestige View Post
Including the topic I've raised?
It is interesting that that topic hasn't gotten much press in practically any source. Whether in science fiction or philosophical discussions, sentient AI springs into being with human-like characteristics. No one ever goes through the evolutionary stages of partial sentience, whatever that means. Robots sometimes get more and more intelligent or capable, but there isn't much discussion about the intermediate stages before that human-like AI appears.


I suppose it's hard to really talk about the ethics of the issue until we see what it looks like in reality. This thread started because of a headline about sentient sex bots. That concept involves the idea of an AI that exists in a finite, enclosed, space, interacting in a physical world. When a generalized AI is created, is there any reason at all to assume that it will exist in such a world? Won't it be more like a process that exists and can be deployed on servers on a network that happen to have available processing power?
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Meadmaker is offline
Old 15th December 2020, 09:17 AM   #38
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 26,526
There are errors in the link in the OP. Example
Quote:
As robots get more sophisticated AI, they will gain independent decision-making skills that will give them a specific legal status as electronic persons, according to a 2016 draft resolution put forth by the European Union Parliament.
A 4-year-old draft resolution? Would have thought that by now the resolution would have passed or been rejected.
It does not say they should have equal rights as humans.
__________________
This signature is for rent.
rjh01 is offline
Old 15th December 2020, 09:23 AM   #39
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by theprestige View Post
Seems like we could go ahead and discuss whether sentience merits ethical protection, and what weight to give corporate profits, right here and right now.
We could, but there's also no real hurry. Like Meadmaker was saying, there's no sign that we're even going in the right direction, much less that it's gonna happen any day now. In fact, if anything, we may actually be getting farther from such a goal, since the terms "machine learning" and "AI" are being taken over by idiot marketers and misused for everything but. So increasingly more funds and manpower are actually being diverted AWAY from what could lead to an actual sentient AI.

It will probably happen eventually, but definitely not in the timeframe for it to be even remotely relevant for these 'sexbots'. They'll be just a machine that responds to stimuli, as programmed, no more sentient than my air conditioner. Even if I add some learning algorithm, like having it figure out over time at what temperature I turn it on or off, and have it do the switching by itself, it still wouldn't be sentient.

Essentially framing it around sexbots going sentient is really no different from going into flights of fantasy about what if Maxwell's demon is real and sentient and objects to my keeping him imprisoned in my air conditioner. In fact, what if he suffers when I turn the air conditioner on?
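The sort of "learning" the air-conditioner example describes, inferring a setpoint from observed on/off events, really is just a running average and a comparison. A hypothetical toy sketch (all names invented for illustration):

```python
class LearningAircon:
    """Learns the temperature at which the user tends to switch the unit on,
    then switches itself on at that point. A running mean and a comparison:
    stimulus and response, nothing resembling an inner life."""

    DEFAULT_THRESHOLD_C = 25.0

    def __init__(self):
        self.switch_on_temps = []  # temperatures observed when the user switched on

    def observe_user_switch_on(self, temp_c):
        self.switch_on_temps.append(temp_c)

    @property
    def learned_threshold_c(self):
        # Fall back to a default until some observations exist
        if not self.switch_on_temps:
            return self.DEFAULT_THRESHOLD_C
        return sum(self.switch_on_temps) / len(self.switch_on_temps)

    def should_switch_on(self, temp_c):
        return temp_c >= self.learned_threshold_c
```

Calling such a device "adaptive" is accurate; calling it sentient, or worrying about its consent, confuses the behavioural surface with whatever would have to be underneath it.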

And yeah, we can have that talk too, but it won't really be all that productive.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 15th December 2020 at 09:24 AM.
HansMustermann is offline
Old 15th December 2020, 09:28 AM   #40
Meadmaker
Penultimate Amazing
 
Meadmaker's Avatar
 
Join Date: Apr 2004
Posts: 25,285
Originally Posted by psionl0 View Post
A sentient being can be other than carbon based and may not need to be "alive" in the biological sense. However, it has to be more than a bunch or neural connections - no matter how many you have.
True.

Although I don't know what else is required, and I don't think anyone does.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Meadmaker is offline
Powered by vBulletin. Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.

This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.