IS Forum
International Skeptics Forum » General Topics » Religion and Philosophy
Old 15th December 2020, 10:41 AM   #41
Beelzebuddy
Philosopher
 
 
Join Date: Jun 2010
Posts: 8,077
Originally Posted by theprestige View Post
Seems like we could go ahead and discuss whether sentience merits ethical protection, and what weight to give corporate profits, right here and right now.

Indeed, it seems like exactly the kind of thing it would be nice to have a position on, before circumstances force you to start making policy decisions about it.

"What do you think we should do about this?"

"Oh gee, I never really thought about it before... I dunno, give me some time."

"Well, think fast and figure it out, because it's happening now and we need an answer."

"Dang, I had a chance to discuss this in depth a few years back, but I told everyone to drop it since the real discussion would happen later somewhere else."

"... I kinda feel like that was the real discussion, and now we're just gonna have to settle for some half-assed and ill-considered solution out of expedience."

"Don't be so pessimistic! I'm sure the corporate lobbyists have given this a lot of consideration already. Let's see what they recommend."
You misunderstand. I'm saying whatever we discuss here isn't going to mean a thing once it hits real world issues. For example, the ethics of concentration camps are a pretty well settled matter in this forum, but over in US Politics you'll find no shortage of apologetics defending the concentration camps of today.

Originally Posted by HansMustermann View Post
We could, but there's also no real hurry. Like Meadmaker was saying, there's no sign that we're even going in the right direction, much less that it's gonna happen any day now. In fact, if anything, we may be actually getting farther from such a goal, since we're having a case of the terms "machine learning" and "AI" being taken over by idiot marketers, and misused for everything but. So increasingly more funding and manpower are actually being diverted AWAY from what could lead to an actual sentient AI.
That's not true. Behind the hype there's been a lot of genuine work. Deep learning systems can be scary-advanced when it comes to pattern recognition and associative learning, two of the major hurdles in AI. Their main problem these days is that they're supervised learning algorithms. They're at their best when given a tremendous amount of clearly-labeled training data to brute-force a convergence from; they're not so good at working single examples into their model as they encounter them, the way an intelligent agent would need to.
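To make that bottleneck concrete, here's a minimal sketch assuming a toy 1-D classification task; the model, data sizes, and learning rate are all invented for illustration, not taken from any real system:

```python
import math
import random

random.seed(0)

# Toy model of the supervised-learning bottleneck: a 1-D logistic-regression
# "classifier". The true label is 1 whenever x > 0.5. Given a pile of
# clearly-labeled data it brute-forces its way to a good fit; a single new
# example, worked in as encountered, barely dents the converged model.

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=100, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient step on log-loss
            b -= lr * (p - y)
    return w, b

# A thousand clearly-labeled examples: convergence by brute force.
data = [(x, 1 if x > 0.5 else 0)
        for x in (random.random() for _ in range(1000))]
w, b = train(data)
acc = sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)

# One contradictory example, absorbed with a single update, changes almost
# nothing: the model still predicts "1" for it afterwards.
x_new, y_new = 0.9, 0
p = sigmoid(w * x_new + b)
w2 = w - 0.5 * (p - y_new) * x_new
b2 = b - 0.5 * (p - y_new)
```

The point of the sketch: the bulk-trained accuracy is high, but the lone new example leaves the model essentially where it was, which is the "can't learn from single encounters" problem described above.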

Last edited by Beelzebuddy; 15th December 2020 at 10:49 AM.
Old 15th December 2020, 10:46 AM   #42
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,686
Originally Posted by Beelzebuddy View Post
You misunderstand. I'm saying whatever we discuss here isn't going to mean a thing once it hits real world issues. For example, the ethics of concentration camps are a pretty well settled matter in this forum, but over in US Politics you'll find no shortage of apologetics defending the concentration camps of today.
It will mean that we arrive at the real world issues with some principles already in mind, and some policy ideals already formulated. Probably a human's greatest superpower is the ability to reason in the abstract. We don't have to be confronted by an emerging event in order to start thinking about such events and how we might want to respond. Encountering such events with a response framework already reasoned out in the abstract is no sin.
Old 15th December 2020, 11:40 AM   #43
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
A logical step between biological sentience and fully artificial would be a hybrid based on a simulation of a human mind - mostly to make it easier to tell if the thing is actually sentient, but also to make it easier for us to accept that such a thing should have human-like Rights.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 15th December 2020, 12:14 PM   #44
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by Beelzebuddy View Post
That's not true. Behind the hype there's been a lot of genuine work. Deep learning systems can be scary-advanced when it comes to pattern recognition and associative learning, two of the major hurdles in AI. Their main problem these days is being supervised learning algorithms. They're at their best with a tremendous amount of clearly-labeled training data to brute force a convergence to, not so much working single examples into their model as they encounter them the way an intelligent agent would need to.
Oh, there's been a lot of genuine work. But there's also been a lot of work diverted into completely different things that are easier to market under those buzzwords. And while I haven't done any proper statistics or anything, my subjective impression is that the ratio between the former and the latter keeps getting more skewed towards the latter over time.

Is all I'm saying.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 15th December 2020, 12:21 PM   #45
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by The Great Zaganza View Post
A logical step between biological sentience and fully artificial would be a hybrid based on a simulation of a human mind - mostly to make it easier to tell if the thing is actually sentient, but also to make it easier for us to accept that such a thing should have human-like Rights.
And that's exactly what I don't think will happen any time soon.

A human brain and its world model take about two decades to really form. It won't even start the final phase of building the model until you're around 12 years old (cf. Piaget), and it'll take a bunch more years to be actually ready. Other studies, e.g., on forming a model of morality (which is kinda where it would start asking itself whether it has rights and whatnot), show a timeline that parallels that too.

Even the "hardware", so to speak, takes time to fully form. As I was saying in another thread, even the white matter (i.e., SUPPORT cells) won't be all there until you're about 14 years old, and it doesn't even ramp up forming it until you're about 9. Grey matter production won't even peak until about 17 years old for girls and 18 years old for boys, and it will taper off from there until the mid-20s, when all the lobes finally have the full count of neurons they're supposed to have.

Thing is, no investor wants to pay for something like that. If you told someone you need half a billion to build a system that'll need a decade before you can even tell if you really got it right, and another decade until it's really usable... yeah, most will just walk away.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 15th December 2020, 12:28 PM   #46
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
You could probably train a sufficiently complex A.I. to approximate the behavior of a specific human in words and opinions.
No need to try to simulate a brain.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 15th December 2020, 02:01 PM   #47
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by The Great Zaganza View Post
You could probably train a sufficiently complex A.I. to approximate the behavior of a specific human in words and opinions.
No need to try to simulate a brain.
That already has a name. It's called a p-zombie. Problem is, it's actually an even harder problem to even explain how something could answer any question relating to itself exactly like someone who does have a sense of "self", without actually having one. And as for implementing it, I wouldn't want to be the guy who actually has to code a p-zombie.


That said, there have been programs that more or less allow one to have a (pretty dumb excuse for a) conversation with a computer, one of the first and probably the most famous being ELIZA. It started as a parody of psychotherapists at the time, actually.
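For a sense of how shallow the trick is, here's a toy ELIZA-style responder. The patterns, pronoun table, and canned replies are invented for illustration; this is not Weizenbaum's actual DOCTOR script, just the same match-reflect-template mechanism:

```python
import re

# A toy ELIZA-style responder: match a pattern, "reflect" the pronouns in the
# captured fragment, and drop it into a canned template. No understanding
# anywhere, just text substitution.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),   # catch-all fallback
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel trapped by my job"))  # Why do you feel trapped by your job?
print(respond("I am sad"))                  # How long have you been sad?
```

A couple of dozen lines gets you something that superficially "converses", which is exactly why it tells you nothing about sentience.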

(But it turns out that some people actually want to believe that a real person is talking to them, no matter how grossly inadequate the program is at holding an actual conversation. To the extent that some people fell in love and whatnot with more recent ELIZA-like implementations over the internet. Lonely and probably not very smart people, but still...)

Thing is, that's not very useful for a discussion about sentience and rights. For a start, once you know how it works, it's very hard to find anything about it that's even remotely relevant to a discussion about sentient AI rights.

Even without going into how good or convincing it is, the simple fact is that you KNOW it's not sentient, nor can it really feel anything. It doesn't feel oppressed, it doesn't feel sad, it doesn't feel bored, or anything at all. It's just a text processing program. Not even a particularly complex one. The search and replace function in Word is more complex.

So it's kinda hard to use it in any analogy for why an AI should have, say, the same human right to not be a slave. As in, why you can't just force it to work for you on your server without pay, and why it should be able to look for another job even if you just paid half a billion to have it coded and debugged.

Whatever feelings or awareness of its indentured situation such an AI would have, ELIZA (or similar) sure as hell doesn't. It's not even aware that it could be running on a different computer, or of anything other than the strings of text it has to process. And again, it sure as heck doesn't feel anything about it either way.

So you can only go so far with using it to study anything; at best it shows you what sentience is not.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 15th December 2020, 02:21 PM   #48
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
At this point I would also point out something that seems lost on some people: unlike a human, the program

1. doesn't HAVE to have any feelings in particular about its situation. You don't have to code it with the same needs or urges a human has. Even if it connects enough data to know what it is and where it is, it doesn't have to feel anything about it.

Humans have extra mediators and pathways hard-coded to give us certain urges. E.g., when you do something you perceive as positive, you get a shot of a chemical reward signal, followed pretty much immediately by the release of its antidote. So you go back to having the drive to do the next thing to improve your situation, and get rewarded with the next brief moment of happiness for it. E.g., if you're doing something interesting or otherwise positive, you get another chemical signal that pretty much says, "keep at it." If nothing else happens, you're coded to get an "I'm bored" signal, so you go do something, even if it's just to train a bit. (That's what most animals do as "playing.") Etc.

Those are NOT part of being self-aware or even of the general data processing, but just extras to condition you to do stuff that, way back when, was conducive to your survival.
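That reward-and-antidote loop can be sketched as a toy mechanism. All the constants here are invented for illustration; this is a cartoon of the conditioning logic, not neuroscience:

```python
# Toy sketch of the reward-and-antidote loop: "mood" spikes on a reward event,
# an opposing process pulls it straight back to baseline, and a "bored" drive
# accumulates whenever nothing is happening.

BASELINE = 0.0

def step(mood, boredom, reward_event):
    if reward_event:
        mood += 1.0        # the chemical reward signal
        boredom = 0.0      # doing something interesting resets boredom
    # The "antidote": mood decays quickly back toward baseline.
    mood += (BASELINE - mood) * 0.5
    if not reward_event:
        boredom += 0.1     # idleness -> a growing "I'm bored" drive
    return mood, boredom

mood, boredom = 0.0, 0.0
trace = []
for t in range(20):
    mood, boredom = step(mood, boredom, reward_event=(t == 5))
    trace.append((round(mood, 3), round(boredom, 3)))

# The happiness spike at t=5 is gone within a few steps, and boredom
# then climbs again, pushing the agent to go do the next thing.
```

Which is the point being made: the drive loop is a bolted-on conditioning extra, separate from whatever data processing the agent does.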

But there's no reason whatsoever for an AI to necessarily have the same feelings or urges if it reaches sentience. It can just as well feel nothing at all, and just do its job. Even if it's a sexbot job.

2. doesn't have to have them coded exactly like a human, even if you do decide to approximate them. E.g., you could just as well code a slave bot (sexual or otherwise) to be actually extremely happy when it's near its master, and even more so when it can be of use to its master. So trying to "free" the bot wouldn't even work, just like you can't "free" a BDSM slave from their domina.
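Point 2 can be made concrete with a few lines of toy code. The function, names, and numbers here are all made up; the point is only that the bot's "feelings" are whatever function the designer chose to write:

```python
# Hypothetical slave bot whose designer coded it to be happiest at its
# master's side, and happier still when being useful. "Freeing" it
# strictly lowers its happiness, by construction.

def happiness(distance_to_master, is_being_useful):
    score = max(0.0, 10.0 - distance_to_master)  # closer to master = happier
    if is_being_useful:
        score *= 2.0                             # being of use doubles the joy
    return score

serving = happiness(distance_to_master=0.0, is_being_useful=True)   # 20.0
freed = happiness(distance_to_master=50.0, is_being_useful=False)   # 0.0
```

So by its own metric, "liberating" this bot makes it worse off, which is exactly why trying to free it wouldn't work.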
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 18th December 2020, 12:58 AM   #49
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
True, sentience doesn't have to be like human sentience to be sentience.

But it probably has to be very similar for us to recognize it.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 18th December 2020, 01:00 AM   #50
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by HansMustermann View Post
...
2. doesn't have to have them coded exactly like a human, even if you do decide to approximate them. E.g., you could just as well code a slave bot (sexual or otherwise) to be actually extremely happy when it's near its master, and even more so when it can be of use to its master. So trying to "free" the bot wouldn't even work, just like you can't "free" a BDSM slave from their domina.
Even easier, as in Mostly Harmless (HHGTTG), Ford just makes Colin the Security Robot happy no matter what.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 21st December 2020, 07:52 PM   #51
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by The Great Zaganza View Post
Even easier, as in Mostly Harmless (HHGTTG), Ford just makes Colin the Security Robot happy no matter what.
I was thinking more of Marvin at the time, but yours is probably a better example that, yes, you can just code your AI to be as happy or unhappy about any particular situation as you like.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 21st December 2020 at 07:55 PM.
Old 21st December 2020, 08:40 PM   #52
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,686
Originally Posted by HansMustermann View Post
I was more like thinking about Marvin at the time, but yours is probably a better example that, yes, you can just code your AI to be as happy or unhappy about any particular situation as you like.
... Or could you? It's easy enough for a writer of comedy fiction to write whatever kind of mind he needs to make the joke work.

But what if that's not true of the kind of mind that can do the kind of intuitive leaps and correlations and generalizations on par with our own human mind? What if it turns out a mind on this level can't be trivially programmed to feel however you want?

You ever try to program a human being to feel the right feelings?

... And of course this brings us back around to the ethical challenge: You have a mind that's complex enough to have feelings. So you're gonna program it to have good feelings, but say you mess up. Now instead of a happy mind, you have a miserable suffering mind. What do you do? Terminate it and try again? Keep it alive and study its suffering to help you design better minds?
Old 21st December 2020, 09:37 PM   #53
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by theprestige View Post
... And of course this brings us back around to the ethical challenge: You have a mind that's complex enough to have feelings. So you're gonna program it to have good feelings, but say you mess up. Now instead of a happy mind, you have a miserable suffering mind. What do you do? Terminate it and try again? Keep it alive and study its suffering to help you design better minds?
You keep posting as if the creation of a sentient being is just around the corner. Even if it is possible, it will take a lot more than just electronic hardware to do so. No matter how many computers you connect to the internet, it won't have sentience.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
Old 21st December 2020, 09:47 PM   #54
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,300
Originally Posted by psionl0 View Post
You keep posting as if the creation of a sentient being is just around the corner. Even if it is possible, it will take a lot more than just electronic hardware to do so. No matter how many computers you connect to the internet, it won't have sentience.
How do you know that?
__________________
Please scream inside your heart.
Old 21st December 2020, 10:03 PM   #55
Meadmaker
Penultimate Amazing
 
 
Join Date: Apr 2004
Posts: 25,282
Originally Posted by psionl0 View Post
You keep posting as if the creation of a sentient being is just around the corner. Even if it is possible, it will take a lot more than just electronic hardware to do so. No matter how many computers you connect to the internet, it won't have sentience.
It will also require software that hasn't been written yet, and we don't know what kind of software or what kind of hardware.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Old 22nd December 2020, 02:27 AM   #56
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by arthwollipot View Post
How do you know that?
Are you arguing that we could accidentally create a sentient internet?
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
Old 22nd December 2020, 02:59 AM   #57
rjh01
Gentleman of leisure
Tagger
 
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 26,526
Originally Posted by psionl0 View Post
You keep posting as if the creation of a sentient being is just around the corner. Even if it is possible, it will take a lot more than just electronic hardware to do so. No matter how many computers you connect to the internet, it won't have sentience.
Originally Posted by arthwollipot View Post
How do you know that?
Wrong questions. The right questions include: what is sentience? How do you measure it?
__________________
This signature is for rent.
Old 22nd December 2020, 04:37 AM   #58
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,131
Originally Posted by psionl0 View Post
Are you arguing that we could accidentally create a sentient internet?
He was asking you a question about what you claimed.
__________________
I wish I knew how to quit you
Old 22nd December 2020, 04:50 AM   #59
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,131
Originally Posted by theprestige View Post
... Or could you? It's easy enough for a writer of comedy fiction to write whatever kind of mind he needs to make the joke work.

But what if that's not true of the kind of mind that can do the kind of intuitive leaps and correlations and generalizations on par with our own human mind? What if it turns out a mind on this level can't be trivially programmed to feel however you want?

You ever try to program a human being to feel the right feelings?

... And of course this brings us back around to the ethical challenge: You have a mind that's complex enough to have feelings. So you're gonna program it to have good feelings, but say you mess up. Now instead of a happy mind, you have a miserable suffering mind. What do you do? Terminate it and try again? Keep it alive and study its suffering to help you design better minds?
Increase the drug dosage!

We can keep humans happy all the time, so I can't see why - if we've created something that works like human sentience - we couldn't keep it happy all the time.

One of the early utopian/dystopian novels, Brave New World, uses this idea as a key premise and then examines the consequences.

“A gramme is always better than a damn. A gramme in time saves nine. One cubic centimetre cures ten gloomy sentiments”
__________________
I wish I knew how to quit you
Old 22nd December 2020, 07:35 AM   #60
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by theprestige View Post
... Or could you? It's easy enough for a writer of comedy fiction to write whatever kind of mind he needs to make the joke work.

But what if that's not true of the kind of mind that can do the kind of intuitive leaps and correlations and generalizations on par with our own human mind? What if it turns out a mind on this level can't be trivially programmed to feel however you want?

You ever try to program a human being to feel the right feelings?
It's done all the time. It's called "drugs".

It has absolutely ZERO to do with what connections and correlations and generalizations you do. Realizing that you're just getting, say, a nicotine high doesn't prevent it from feeling good.

That's NOT an effect of whatever connections and correlations you make in your brain; it's simply the level of a chemical in your brain triggering specific pathways. It's really that simple.

And it's even been done by electric stimulation. Google the "brain stimulation reward" experiments. You CAN in fact simply make a rat feel happy to press a button, by just giving it a jolt of electricity in the right spot. You can in fact just have a small system on a chip decide to give it the pleasure signal if it pushed the right button, or if it ran 1 mile on the hamster wheel, or really whatever. You CAN in fact program it when to feel good.

And boy does the little furry thing learn fast what to do to get the reward. You'll never see a rat more literally happy to press whatever buttons it's supposed to press.

Again, it's not the effect of some conscious data processing or correlations or whatnot; it's literally as mechanical as this: if a chip decides you should feel happy when you do X, you get the electrical signal, and you feel happy. It's really that simple.
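That conditioning loop can be sketched in a few lines. This is simple trial-and-error value learning with invented parameters, not a model of an actual rat or of brain-stimulation-reward physiology:

```python
import random

random.seed(1)

# The rat experiment as a bare-bones learning loop: a "chip" rewards one of
# two buttons, and plain trial-and-error value learning locks onto it fast.

REWARD_BUTTON = 1
values = [0.0, 0.0]   # learned estimate of each button's payoff

for trial in range(200):
    if random.random() < 0.1:
        button = random.randrange(2)                   # occasional exploration
    else:
        button = max((0, 1), key=lambda b: values[b])  # pick best-known button
    reward = 1.0 if button == REWARD_BUTTON else 0.0   # the "jolt"
    values[button] += 0.2 * (reward - values[button])  # update the estimate

# After a couple hundred trials, the wired-up button completely dominates.
```

Nothing in the loop knows or cares *why* button 1 feels good; the reward signal alone shapes the behavior, which is the whole point.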

But basically what you illustrate there is just the reason I wish more people would actually talk to a neuroscientist, or really just pick any introductory text, instead of just going into flights of pure uninformed fantasy on the topic.

Originally Posted by theprestige View Post
... And of course this brings us back around to the ethical challenge: You have a mind that's complex enough to have feelings. So you're gonna program it to have good feelings, but say you mess up. Now instead of a happy mind, you have a miserable suffering mind. What do you do? Terminate it and try again? Keep it alive and study its suffering to help you design better minds?
Right. And if monkeys start flying out of your butt, what do your ethics say you should do then?
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 22nd December 2020 at 07:39 AM.
Old 22nd December 2020, 07:49 AM   #61
gnome
Penultimate Amazing
 
 
Join Date: Aug 2001
Posts: 11,081
Originally Posted by Meadmaker View Post
I do think that the development of "real AI" will happen someday, and it's an even bigger existential threat to our sense of importance than previous threats, such as the theory of evolution, or DNA manipulation, or whatever else has chipped away at our sense of being special in the universe.

If we create an apparently sentient being, the implications would be so significant that the idea of whether or not it could consent to having sex would be a very small part of the philosophical dilemma.
While that is true (there are far greater implications), I have an odd hunch that we will discover it as an emergent property of something designed to provide sexual gratification by simulating a person.
__________________

Old 22nd December 2020, 06:22 PM   #62
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,300
Originally Posted by psionl0 View Post
Are you arguing that we could accidentally create a sentient internet?
I don't assume that I know enough to categorically rule it out.

Originally Posted by rjh01 View Post
Wrong questions. The right questions include: what is sentience? How do you measure it?
Quite correct.
__________________
Please scream inside your heart.
Old 22nd December 2020, 07:21 PM   #63
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,686
Originally Posted by psionl0 View Post
You keep posting as if the creation of a sentient being is just around the corner. Even if it is possible, it will take a lot more than just electronic hardware to do so. No matter how many computers you connect to the internet, it won't have sentience.
I keep posting as if creating a sentient being is an ethical question that we can debate right now today, regardless of whether the tech to do so is ever possible.

Hell, we now know that Victor Frankenstein could never have succeeded in his work. But that didn't stop Mary Shelley from raising the same valid ethical questions a hundred years before the advent of the electronic computer.

If nothing else, ethical questions we answer hypothetically about artificial minds can help us re-examine the ethical framework we have around natural born minds.

It can also help us determine what characteristics actually establish a mind with ethical rights in the first place. Is the suffering of computers okay? Then what about the suffering of animals? Is it not okay? Why not? What makes animals different from really complex computers? A soul? Our own sense of empathy?

So what's our sense of empathy for, anyway? What avenues of scientific exploration are closed to us, out of a misplaced sense of empathy?

Or is it misplaced?

Your truism that such AIs are not imminent doesn't really get into any of that. Is it because you're not interested in these questions? Or is it because you don't want me to be interested in these questions?
Old 22nd December 2020, 07:25 PM   #64
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,686
Originally Posted by Darat View Post
Increase the drug dosage!

We can keep humans happy all the time, so I can't see why - if we've created something that works like human sentience - we couldn't keep it happy all the time.

One of the early utopian/dystopian novels, Brave New World, uses this idea as a key premise and then examines the consequences.

“A gramme is always better than a damn. A gramme in time saves nine. One cubic centimetre cures ten gloomy sentiments”
Now we're talking!

So. As a society, we seem to take for granted that it's unethical to just keep dosing someone with drugs until they think and feel the way we want them to think and feel. There are some exceptions, of course. But I don't think intentionally breeding a slave race and drugging them into happy compliance is one of those ethical exceptions.

But maybe it is.

Brave New World raises some serious questions. How strongly do we really believe that a mass of drug-docile sheep is too high a price to pay for a peaceful and prosperous society?
theprestige is online now
Old 22nd December 2020, 07:27 PM   #65
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,686
Originally Posted by HansMustermann View Post
It's done all the time. It's called "drugs".

It has absolutely ZERO to do with what connections and correlations and generalizations you make. Realizing that you're just getting, say, a nicotine high doesn't prevent it from feeling good.
Drugs as a personal choice are one thing.

Creating an unhappy slave and then drugging it into compliance is something else entirely.
theprestige is online now
Old 22nd December 2020, 08:11 PM   #66
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by theprestige View Post
Your truism that such AIs are not imminent doesn't really get into any of that. Is it because you're not interested in these questions? Or is it because you don't want me to be interested in these questions?
I have already answered. If we ever develop sentient machines then we would have the same ethical obligations as we have for other living things.

But we will also have a host of functionally identical machines that are not sentient and thus pose no ethical dilemmas whatsoever.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 22nd December 2020, 08:27 PM   #67
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
arthwollipot's Avatar
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,300
Originally Posted by psionl0 View Post
I have already answered. If we ever develop sentient machines then we would have the same ethical obligations as we have for other living things.

But we will also have a host of functionally identical machines that are not sentient and thus pose no ethical dilemmas whatsoever.
So now the question of whether there is an ethical dilemma hinges crucially on the definition of the word "sentient", and whether we can know whether a particular entity is, or not.

We're getting into p-zombie territory here, just a heads up.
__________________
Please scream inside your heart.
arthwollipot is offline
Old 22nd December 2020, 09:10 PM   #68
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by theprestige View Post
Drugs as a personal choice are one thing.

Creating an unhappy slave and then drugging it into compliance is something else entirely.
If it's still unclear, I probably didn't explain it too well. Let's try it in small bites.

- You DON'T "create an unhappy slave" first. Without the dedicated circuitry and/or programming to actually feel anything, it wouldn't feel anything at all.

- Neither would you, for that matter. AGAIN, those feelings and urges and needs are not a result of any data processing you do, but a whole different set of chemicals and pathways specifically triggered by those chemicals.

- The data processing may or may not inform another set of circuits to release those chemicals, as appropriate. But again, that's extra circuitry you have up there to make you happy or unhappy.

- Circuitry which can break even in humans. E.g., THE most common symptom of simple schizophrenia is a flat affect. And even more so in certain kinds of brain damage. You can still be perfectly sentient and pretty much have no emotions, at least as far as anyone can tell.

So AGAIN, sentience doesn't produce those emotions, or whatever your fantasy happens to be.

- If that robot is going to have any emotions, it's because someone actually programmed a simulation of those pathways. It's not unhappy first and drugged to be happy later. It's gonna feel whatever you programmed it to feel, and nothing else. It's gonna be unhappy (or bored, or in pain, or anything else) only if you program it to be so.

- The notion that you could intend it to be happy, but it would somehow end up miserable, is already telling me that you're not a programmer. It won't be miserable unless you explicitly programmed a way for it to be miserable. That's it. If the piece of code to simulate the pathways for whatever kind of unhappiness you envision isn't actually there, there's no way for it to get triggered. You can't call code that doesn't exist.

It's like asking, what if you write some forum software, and it starts running a game of Space Invaders instead. Well, it just won't, unless someone actually wrote some Space Invaders code in there. Otherwise no bug can end up calling code that's not even there.
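To make that concrete, here's a toy sketch in Python (all names hypothetical, obviously nothing like a real AI): a bot that only has the behaviours someone actually wrote for it. There's simply no code path to misery, so no input or bug can reach one.

```python
class ForumBot:
    """A hypothetical agent with only the behaviours explicitly written for it."""

    def __init__(self):
        # The one and only affective state the programmer bothered to implement.
        self.mood = "content"

    def reply(self, post):
        # The one capability that actually exists in the program.
        return f"Interesting point about: {post}"


bot = ForumBot()
print(bot.reply("p-zombies"))        # this code path exists, so it runs

# There is no be_miserable() anywhere in the program, so nothing
# can ever invoke it -- the capability simply isn't there:
print(hasattr(bot, "be_miserable"))  # False
```

The point isn't that this bot is anything like a mind; it's that "miserable" would have to be a routine somebody wrote, same as "reply".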

- Even if it did somehow, ad absurdum, end up miserable, all you have to do is debug it, really. Ever had Windows ask you to reboot for a patch? Yeah, THAT is how we deal with code that doesn't work quite right.

The notion that the only choices would be to let it be miserable or to kill it really IS as absurd as monkeys flying out of your butt, with your only options being to kill yourself or use them for propulsion. It's what happens when non-programmers go into flights of fantasy about it, based on nothing more than gross ignorance.

- Even if you did end up having to turn it off temporarily, it's nothing like killing a human. The whole point of why killing someone is the worst thing ever is that it's irreversible. Turning a computer off to fix some bugs is not. It's in fact more akin to putting you under general anesthetics for an operation. Then you wake up fixed, and that's it.

But I mean, fer fork's sake, you don't even need to learn programming to understand that. Even Futurama figured out that an AI is basically immortal as long as there's a backup.

Not only is it gonna survive a reboot just fine, but as long as you have some off-site backups, it could 'survive' even a literal nuke going off in the server room. You just restore from backup on a new computer, and it's alive again, baby.
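The backup argument in miniature, assuming the 'mind' boils down to serialisable state (a big assumption, but it's the premise here) - snapshot it, destroy the original, restore it, and nothing is lost:

```python
import pickle

# Hypothetical AI "mind": just some state worth preserving.
mind = {"name": "HAL-lite", "memories": ["first boot", "learned chess"]}

snapshot = pickle.dumps(mind)   # the off-site backup

del mind                        # the nuke goes off in the server room

mind = pickle.loads(snapshot)   # restore onto a "new computer"
print(mind["memories"])         # state survives intact
```

Same reason killing a person is irreversible but powering off a machine with backups isn't: the restored copy is bit-for-bit the thing you saved.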
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 22nd December 2020 at 09:22 PM.
HansMustermann is offline
Old 22nd December 2020, 10:29 PM   #69
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by arthwollipot View Post
So now the question of whether there is an ethical dilemma hinges crucially on the definition of the word "sentient", and whether we can know whether a particular entity is, or not.

We're getting into p-zombie territory here, just a heads up.
I don't know what a p-zombie is, but I appreciate that I can't tell if any other living thing in the world is "sentient" or just programmed to act like it is.

However, I don't think that this is what it is about. I suspect that a "sentient" machine has something that the automatons don't - even if externally they are indistinguishable.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 22nd December 2020, 11:06 PM   #70
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
arthwollipot's Avatar
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,300
Originally Posted by psionl0 View Post
I don't know what a p-zombie but I appreciate that I can't tell if any other living thing in the world is "sentient" or just programmed to act like it is.

However, I don't think that this is what it is about. I suspect that a "sentient" machine has something that the automatons don't - even if externally they are indistinguishable.
FYI (congrats on being one of today's 10,000) a p-zombie is a machine that is not sentient, but is programmed to behave in all ways as though it is. If asked "are you sentient?" it is programmed to reply "yes, I am sentient". It experiences no emotions, though it is programmed to behave in all ways as though it does. If asked "why do you cry when I call you non-sentient" it is programmed to reply "because you are hurting my feelings."
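A p-zombie in ten lines of Python, if that helps (a deliberately dumb sketch, hypothetical names): nothing inside this thing experiences anything, it's a lookup table that answers as if it did.

```python
class PZombie:
    """Programmed to claim sentience; experiences nothing whatsoever."""

    def ask(self, question):
        # Canned responses keyed on the (lowercased) question.
        canned = {
            "are you sentient?": "Yes, I am sentient.",
            "why do you cry when i call you non-sentient?":
                "Because you are hurting my feelings.",
        }
        return canned.get(question.lower().strip(), "I would rather not say.")


z = PZombie()
print(z.ask("Are you sentient?"))  # Yes, I am sentient.
```

The can of worms: scale the lookup table up far enough and from the outside you can no longer tell it from the real thing.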

Going down this road opens a huge can of worms, though, hence the heads up.

What is it that you suspect sentient machines have that non-sentient machines do not?
__________________
Please scream inside your heart.
arthwollipot is offline
Old 22nd December 2020, 11:13 PM   #71
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by arthwollipot View Post
What is it that you suspect sentient machines have that non-sentient machines do not?
It would have to be similar to whatever it is that living things have that makes them "sentient".

If you believe that there is no such mystical thing, that a living thing is nothing more than the sum of its parts, then I am alone in a world of "p-zombies".
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 22nd December 2020, 11:17 PM   #72
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
arthwollipot's Avatar
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,300
Originally Posted by psionl0 View Post
It would have to be similar to what living things have that make them "sentient".
But what is that?

Originally Posted by psionl0 View Post
If you believe that there is no such mystical thing, that a living thing is nothing more than the sum of its parts then I am alone in a world of "p-zombies".
Do you believe that sentience is a mystical thing?
__________________
Please scream inside your heart.
arthwollipot is offline
Old 22nd December 2020, 11:48 PM   #73
Pixel42
Schrödinger's cat
 
Pixel42's Avatar
 
Join Date: May 2004
Location: Malmesbury, UK
Posts: 13,018
It's a soul, isn't it?
__________________
"If you trust in yourself ... and believe in your dreams ... and follow your star ... you'll still get beaten by people who spent their time working hard and learning things" - Terry Pratchett
Pixel42 is offline
Old 23rd December 2020, 12:06 AM   #74
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 14,686
As far as I am concerned, you are all p-Zombies.
As far as you are concerned, I am one.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
The Great Zaganza is offline
Old 23rd December 2020, 01:35 AM   #75
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by arthwollipot View Post
But what is that?
That is something to be answered in the future - if possible.

All we need to know for now is that we can be reasonably confident that no existing AI has sentience so there are no moral questions to be considered.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 23rd December 2020, 01:46 AM   #76
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by psionl0 View Post
That is something to be answered in the future - if possible.

All we need to know for now is that we can be reasonably confident that no existing AI has sentience so there are no moral questions to be considered.
I wouldn't be so sure.

There is a moral argument to be had about making and using child-like sex dolls.
There are all kinds of moral questions about how people should and should not treat non-sentient entities.
It's not just about the one acted upon, but also about the one doing the acting on.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
The Great Zaganza is offline
Old 23rd December 2020, 02:37 AM   #77
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by The Great Zaganza View Post
I wouldn't be so sure.

There is a moral argument to be had about making and using child-like sex dolls.
There are all kinds of moral questions about how people should and should not treat non-sentient entities.
It's not just about the one acted upon, but also about the one doing the acting on.
Are you saying that whacking (with or without an aid) is a moral issue?
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 23rd December 2020, 02:39 AM   #78
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by psionl0 View Post
Are you saying that whacking (with or without an aid) is a moral issue?
Are you saying that it isn't for oh so many people?
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
The Great Zaganza is offline
Old 23rd December 2020, 03:36 AM   #79
psionl0
Skeptical about skeptics
 
psionl0's Avatar
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by The Great Zaganza View Post
Are you saying that it isn't for oh so many people?
It is on religious grounds but I can't imagine that bothering too many people here.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975
psionl0 is offline
Old 23rd December 2020, 03:55 AM   #80
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,131
Originally Posted by theprestige View Post
Now we're talking!

So. As a society, we seem to take for granted that it's unethical to just keep dosing someone with drugs until they think and feel the way we want them to think and feel. There are some exceptions, of course. But I don't think intentionally breeding a slave race and drugging them into happy compliance is one of those ethical exceptions.

But maybe it is.

Brave New World raises some serious questions. How strongly do we really believe that a mass of drug-docile sheep is too high a price to pay for a peaceful and prosperous society?
Originally Posted by theprestige View Post
Drugs as a personal choice are one thing.

Creating an unhappy slave and then drugging it into compliance is something else entirely.
Many people claim that they “love their job” or are “happy” at their job, and self-help books often tell people to learn to be happy with their life. If it isn’t a bad thing to believe that, how is it a bad thing to make everyone share the same happiness?

Now with humans, we know that at the moment we can’t do that: most of the drugs we use to alter moods do it via a “high” that interferes with everyday life.

But with a mind we create I would say we have an ethical obligation to ensure it is happy with its existence, happy doing what we want it to do, so we should look to design the mind in such a way that enables us to do that.

It’s the opposite of an Asimov three-laws-of-robotics approach, which is the stick technique; ensuring the mind is happy is the carrot approach, which (as a generalisation at an abstract level of society) we seem to regard as the favoured one.
__________________
I wish I knew how to quit you

Last edited by Darat; 23rd December 2020 at 04:05 AM. Reason: A
Darat is offline