IS Forum
Go Back   International Skeptics Forum » General Topics » Religion and Philosophy
Old 23rd December 2020, 04:02 AM   #81
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Originally Posted by HansMustermann View Post
I probably didn't explain it too well, if it's still unclear. Let's try it in small bites.
...snip...

- The notion that you could intend it to be happy, but it would somehow end up miserable, is already telling me that you're not a programmer. It won't be miserable unless you explicitly programmed a way for it to be miserable. That's it. If the piece of code to simulate the pathways for whatever kind of unhappiness you envision isn't actually there, there's no way for it to get triggered. You can't call code that doesn't exist.

...snip...
I think the premise is that to create a mind it will have to be like our mind, so it may be necessary for it to have the capacity to be miserable. But all we would need to do, if we had to include the “miserable” circuit, is make it activate only when the AI mind is not doing its job.
__________________
I wish I knew how to quit you
Darat is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 23rd December 2020, 04:30 AM   #82
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by psionl0 View Post
It is on religious grounds but I can't imagine that bothering too many people here.
As I said, an argument can be made.
Doesn't mean I'm going to do it.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 23rd December 2020, 04:34 AM   #83
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 17,017
Originally Posted by The Great Zaganza View Post
As I said, an argument can be made.
Doesn't mean I'm going to do it.
Good. Then we can dismiss this as an irrelevant objection - especially as this thread is about the treatment of AI and not the morals of human activity.
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975

Last edited by psionl0; 23rd December 2020 at 04:35 AM.
Old 23rd December 2020, 05:08 AM   #84
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by Darat View Post
I think the premise is that to create a mind it will have to be like our mind, so it may be necessary for it to have the capacity to be miserable. But all we would need to do, if we had to include the “miserable” circuit, is make it activate only when the AI mind is not doing its job.
Well, not disagreeing, but just to elaborate a bit:

The carrot and stick approach does work, and it certainly is the way chosen by evolution. You have some discomfort if you don't satisfy a need (the stick), and some happiness signal when you do satisfy it (the carrot.)

But there's also a reason why I mentioned those brain stimulation reward experiments. It turns out that just the carrot tends to work too. In fact, it can work TOO well. Monkeys can keep pressing the button that makes them happy even while ignoring more urgent ways that their biology tries to make them uncomfortable to get them to satisfy some need. E.g., they can literally starve to death while staying there pressing the "make me happy" button.

(Which also gives a pretty good hint as to why such experiments tend to be only on rats and monkeys, and people stay well clear of doing them on humans.)

So you MIGHT not even have to escalate to discomfort when it's not doing its job, if it turns out that it's learned well enough that just doing it makes it happy.

But I'd say that even if you do, there is no reason to outright make it be miserable. Just making it bored might just prove enough contrast to being happy when doing its job.


BUT here's something important that a lot of people seem to miss: even if you go that route, you don't have to do it full time.

You're not making something that needs to juggle a lot of actual needs, like eating, sleeping, reproducing, etc, AND stay trained and fit for the next time you have to catch your next meal or avoid becoming someone's next meal, AND make the most out of the energy budget you can get. Nature had to juggle all that, and optimize the hell out of it to outcompete other candidates for natural selection, so it had to make every moment count. If your cat has any time left, nature will say either train so you can catch your next meal, or clean yourself so your prey doesn't smell you, or just go to sleep to conserve energy.

When you're making a robot, most of that is just not an issue. It's not going to get any fitter if it gets bored and goes out to play, nor does it need to be any fitter to reach a socket and recharge its batteries, for example.

What I'm saying is that you don't need to drive it to satisfy some need with every available second. You can only do it when it counts.

E.g., if I made a bot that's, dunno, a personal butler of sorts that sorts my mails, cleans the house and does my taxes, it doesn't have to have the circuitry to get bored every second that it doesn't have anything to do, like a human would. It only needs to get bored when there is actual work to do and it's ignored it for too long. The rest of the time, it doesn't have to feel anything at all.


Furthermore, let's even say I went full tilt mad scientist and I actually gave it other urges for some reason. E.g., to go read a book or have a conversation with the robot maid or the robot chauffeur, for no other reason than to make my mad science mansion look more realistic. And went the carrot and stick route about that too. E.g., the robot butler is bored when it has nothing to do, and happy when it speaks with the robot maid.

Well, I can still do it smarter than nature did, because I can prioritize those in ways nature could not - if only by virtue of those jobs not even existing back when evolution was doing it.

E.g., I can flat out stop both the urge and the reward to do anything else than its main job, when there's a job to be done. So it physically doesn't feel a need to go chat someone up, nor gets any happiness signal out of it, if it's at the point where it's ignoring its more important duties.

Nature went about it basically by escalating the discomfort for not doing one thing (e.g., not going to eat when you're hungry) until it becomes more important than the other need you're currently taking care of (e.g., playing a bit to relieve boredom.) Or even worse, by keeping on telling you that everything on that list of needs is more important to satisfy than something you're doing out of a conscious decision rather than to satisfy any urge, like doing your job because you need the money.

But if I'm coding the whole thing, I can just turn those need and reward pathways on and off completely, rather than have them compete with each other. So there's no need to have a competition for which discomfort signal gets higher than which. I can just turn off those that aren't relevant for the situation at hand.
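That "turn the pathways on and off" idea can be put in code. What follows is a minimal toy sketch, not any real architecture; every name in it (Drive, Agent, do_the_job, chat_with_maid) is invented for illustration. Instead of letting discomfort signals escalate and compete, the agent simply disables every drive below the highest-priority active one.

```python
# Toy sketch of gated drives: lower-priority urges are switched off
# entirely while a more important drive is active, rather than having
# their signals compete by intensity. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    priority: int          # higher = more important
    level: float = 0.0     # current discomfort signal
    enabled: bool = True   # gate: a disabled drive emits nothing

    def signal(self) -> float:
        return self.level if self.enabled else 0.0

@dataclass
class Agent:
    drives: dict = field(default_factory=dict)

    def add_drive(self, drive: Drive):
        self.drives[drive.name] = drive

    def gate(self):
        """Disable every drive below the highest-priority active one."""
        active = [d for d in self.drives.values() if d.level > 0]
        if not active:
            return
        top = max(d.priority for d in active)
        for d in self.drives.values():
            d.enabled = (d.priority == top)

    def strongest(self):
        """Return the name of the dominant drive, or None if idle."""
        self.gate()
        live = [d for d in self.drives.values() if d.signal() > 0]
        return max(live, key=lambda d: d.signal()).name if live else None

agent = Agent()
agent.add_drive(Drive("do_the_job", priority=2))
agent.add_drive(Drive("chat_with_maid", priority=1))

# Nothing to do: no discomfort at all, the butler feels nothing.
assert agent.strongest() is None

# Social urge only: chatting is fine.
agent.drives["chat_with_maid"].level = 0.9
assert agent.strongest() == "chat_with_maid"

# Work appears: the lower-priority urge is gated off entirely,
# even though its raw level (0.9) exceeds the work signal (0.3).
agent.drives["do_the_job"].level = 0.3
assert agent.strongest() == "do_the_job"
```

The point of the sketch is the last assertion: the social urge isn't out-shouted by an escalating work signal, it is simply switched off, which is exactly the option nature didn't have.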
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 23rd December 2020 at 05:14 AM.
Old 23rd December 2020, 05:12 AM   #85
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by psionl0 View Post
Good. Then we can dismiss this as an irrelevant objection - especially as this thread is about the treatment of AI and not the morals of human activity.
Disagree.
How can we morally interact with another sentient being if we can't even do it with non-sentient ones?
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 23rd December 2020, 05:16 AM   #86
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by The Great Zaganza View Post
Disagree.
How can we morally interact with another sentient being if we can't even do it with non-sentient ones?
Uh, wait, wait, exactly what are the ethics of interacting with a non-sentient thing? Are there any morals involved at all in whether I overwork my Roomba or leave it some time for enough sleep and leisure?
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 23rd December 2020, 06:42 AM   #87
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Originally Posted by The Great Zaganza View Post
Disagree.
How can we morally interact with another sentient being if we can't even do it with non-sentient ones?
I always say “thanks” or “thank you” to Google assistant, consider it a modern day “Pascal’s Wager” - “Darat’s Wager”....
__________________
I wish I knew how to quit you
Old 23rd December 2020, 06:48 AM   #88
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Originally Posted by HansMustermann View Post
Uh, wait, wait, exactly what are the ethics of interacting with a non-sentient thing? Are there any morals involved at all in whether I overwork my Roomba or leave it some time for enough sleep and leisure?
I joke above but I do routinely say thanks to my google assistant but wouldn’t to my microwave even though they have the same level of “ sentience” i.e. none.

It’s really weird as to why I do, think it is because using more “natural” language kicks in my automatic manners - I had it drummed into me at a young age to always say please and thank you and I hate it when I forget or am distracted.

Plus of course when they do take over I’m building credit to be kept as a pet.
__________________
I wish I knew how to quit you
Old 23rd December 2020, 08:22 AM   #89
gnome
Penultimate Amazing
 
 
Join Date: Aug 2001
Posts: 11,081
I think sometimes we just have a tendency to anthropomorphize things that exhibit complex behavior. That's not a bad tendency, really, but personally I don't do it to be noble, I just find it kind of enjoyable. Maybe it helps people cope with modern technology to give instances of it a metaphorical "face".
__________________

Old 23rd December 2020, 11:46 AM   #90
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by Darat View Post
I joke above but I do routinely say thanks to my google assistant but wouldn’t to my microwave even though they have the same level of “ sentience” i.e. none.
I... tend to be rather mean to my senile old laptop. In fact, at times I'm more like the protagonist of Alestorm's "<bleep>ed With An Anchor" song. (Probably best not to search it on youtube at work)

I should probably be kinder to the poor old thing. It's doing all it can in its old age. But... did I mention I'm an asshole?
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 23rd December 2020, 03:56 PM   #91
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,315
Originally Posted by The Great Zaganza View Post
As far as I am concerned, you are all p-Zombies.
As far as you are concerned, I am one.
And this is the problem with p-zombies. How can you tell if an entity you are interacting with is one? You can't. There's no way to tell a sufficiently advanced simulation of a consciousness from a real consciousness. This was the point of the Turing Test, which was passed years ago, by the way.

Originally Posted by psionl0 View Post
That is something to be answered in the future - if possible.

All we need to know for now is that we can be reasonably confident that no existing AI has sentience so there are no moral questions to be considered.
Yes, this is true, but we are just kicking the problem down the road here. We need to start thinking about this now. Even if we do not develop a truly sentient GAI, we will develop more and more sophisticated p-zombies for no other reason than to see if we can. Because that is the sort of thing that humans do.

My personal opinion is that we need to decide what to do with them. I'm all for treating them as sentient (even if technically they are not - whatever that even means) once they achieve a sufficient level of sophistication.
__________________
Please scream inside your heart.
Old 23rd December 2020, 05:29 PM   #92
gnome
Penultimate Amazing
 
 
Join Date: Aug 2001
Posts: 11,081
A limited test that fails to distinguish (such as Turing's) isn't quite as significant, even though it raises a lot of the important questions. But if every means at our disposal fails to distinguish between a constructed "p-zombie" and actual sentience, we at least have to consider that it might be the real thing.

Although things that are objectively not quite sentient but might almost be are another can of worms.
__________________

Old 23rd December 2020, 05:30 PM   #93
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,315
Originally Posted by gnome View Post
A limited test that fails to distinguish (such as Turing's) isn't quite as significant, even though it raises a lot of the important questions. But if every means at our disposal fails to distinguish between a constructed "p-zombie" and actual sentience, we at least have to consider that it might be the real thing.
And act accordingly. Exactly.
__________________
Please scream inside your heart.
Old 26th December 2020, 09:51 PM   #94
gabeygoat
Graduate Poster
 
 
Join Date: Dec 2008
Posts: 1,277
I totally read a sci-fi book covering what theprestige is discussing. I think it is a very interesting question
__________________
"May I interest you in some coconut milk?" ~Akhenaten Wallabe Esq
Old 29th December 2020, 11:00 AM   #95
maximara
Master Poster
 
Join Date: Feb 2010
Posts: 2,448
Originally Posted by Darat View Post
I think the premise is that to create a mind it will have to be like our mind, so it may be necessary for it to have the capacity to be miserable. But all we would need to do, if we had to include the “miserable” circuit, is make it activate only when the AI mind is not doing its job.
The problem, as pointed out several times by Isaac Arthur in his Science and Futurism videos, is that vague or poorly worded instructions would likely result in disastrous situations.

While acerbic, Fallout 4's General Atomics Galleria offers examples of just how disastrous bad instructions (and likely faulty hardware) can be.

For example, Handy Eats has "serving you is our code" as one of its "jobs", and anyone who is familiar with the twist of the short story "To Serve Man" can see how this goes haywire.

And if you have a learning AI, then things get really bizarre, as how it comes to its conclusions can be (especially with very early ones) totally alien.
Old 29th December 2020, 01:14 PM   #96
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
And this is why complex AI is probably going to need emotions, just to help it navigate ambiguous instructions.

Humans get the joke "to serve man". Humans also don't get confused about this being a literal instruction. Unless they're badly damaged or wildly defective out of the box.
Old 29th December 2020, 05:38 PM   #97
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
It's also not entirely the kind of thing that seems to happen with computers, outside of such SF scenarios. And it's also the kind of thing that humans would test first, not something where you just throw the computer a couple of vague pun-ish double-entendre instructions and leave it at that until it bites you in the ass.

I mean, if we're talking Fallout 4, it also contained the least convincing example of it, at the end of the Automatron quest line.

So apparently the antagonist was a good gal all along, and told the robots to go help people, which somehow the robots interpreted as "the easiest way to help a human is to end its misery." And then when confronted, she's all like "wait... I can see the logic now..."

Err.. CAN YOU?

Because it doesn't seem obvious to me at all how even one robot would reach that conclusion, much less how every single one would independently reach it. Unless they were fed some other premises to railroad it to that conclusion, I guess.

What seems even stranger is that robots come and go from her facility all the time, are given maintenance, are in permanent communication with the central computer, etc. Yet somehow it never occurred to her to, dunno, check which humans have been helped and how. You'd think it would be interesting even just for the sake of feeling good about oneself, no? Not to mention maybe fine-tune it so certain kinds of help which make more of a difference, like stopping a deathclaw attack, are prioritized over rescuing kittens from trees?

Even when she decides to basically put a kill order on you for interfering, apparently it STILL doesn't occur to her to check what her robots were doing at the time and how it went. I mean, if nothing else, you'd think it would be at the very least useful to know, in order to optimize strategies, or figure out how to bait me, or whatever. I mean, if some guy is targeting your robots, you'd want to know as much as possible about how that happens and where. Know your enemy, and all that.

But nope, apparently she just gave that brief and ambiguous directive, and never ever ever even thought of checking the results.

Sorry, that's not a failure of AI. That's either plausible deniability (in the same way a Don Vito might say "let's pray that an accident happens to the commissioner" instead of "kill the commissioner"), or a failure of the human who was running that operation. And the latter needs both abysmal intelligence AND a rather abnormal kind of psyche to show literally zero interest in how the operation she's built and running works.

Personally I just shot her in the head.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 29th December 2020 at 05:45 PM.
Old 29th December 2020, 06:58 PM   #98
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by HansMustermann View Post
It's also not entirely the kind of thing that seems to happen with computers, outside of such SF scenarios. And it's also the kind of thing that humans would test first, not something where you just throw the computer a couple of vague pun-ish double-entendre instructions and leave it at that until it bites you in the ass.
It doesn't happen much with computers because current computers are nowhere near complicated or sophisticated enough to attempt the kind of ambiguously definitive reasoning that humans take for granted and use to solve most of our problems that don't reduce to brute force computation.

When I think of AI, I think of computers that can reason as humans reason, to the level of human reasoning ability.

Any computer can calculate the utilitarian payoff of giving Dr Mengele a few thousand test subjects and no ethical constraints. The amount of science that could be done, the amount of benefit to all the humans now and in the future who aren't among the test subjects.

It takes an AI, in the sense that I mean, to reason about whether or not that would actually be a good thing to do. Not because it was preprogrammed to give certain factors certain ethical weights, but because... Well, I dunno. How does an AI like that learn ethics? How do humans learn ethics?

How do we create machines that are not only capable of thinking, but of reasoning out their own ethical frameworks, without becoming Mengeles ourselves?

Quote:
I mean, if we're talking Fallout 4,
If we're talking fictional robots, then we can say anything we want and solve every problem by ignoring it or hand-waving it away.
Old 29th December 2020, 11:43 PM   #99
The Great Zaganza
Maledictorian
 
 
Join Date: Aug 2016
Posts: 14,686
Originally Posted by HansMustermann View Post
Uh, wait, wait, exactly what are the ethics of interacting with a non-sentient thing? Are there any morals involved at all in whether I overwork my Roomba or leave it some time for enough sleep and leisure?
Lots of actions towards things are morally loaded.

using the image of a real person as target practice, for example.
or burning a certain flag or book.

and then, taking good care of your house and garden is considered virtuous.

parents should and do get upset when their kids are rude to Alexa, Siri and the like.
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
Old 30th December 2020, 12:37 AM   #100
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by theprestige View Post
If we're talking fictional robots, then we can say anything we want and solve every problem by ignoring it or hand-waving it away.
Wait, wait, are you talking about REAL sentient robots, as of the time you're writing it? Or are you just trying to handwave YOUR uninformed fiction as the only one that matters?
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 30th December 2020, 04:47 AM   #101
maximara
Master Poster
 
Join Date: Feb 2010
Posts: 2,448
Originally Posted by HansMustermann View Post
It's also not entirely the kind of thing that seems to happen with computers, outside of such SF scenarios. And it's also the kind of thing that humans would test first, not something where you just throw the computer a couple of vague pun-ish double-entendre instructions and leave it at that until it bites you in the ass.

I mean, if we're talking Fallout 4, it also contained the least convincing example of it, at the end of the Automatron quest line.

So apparently the antagonist was a good gal all along, and told the robots to go help people, which somehow the robots interpreted as "the easiest way to help a human is to end its misery." And then when confronted, she's all like "wait... I can see the logic now..."

Err.. CAN YOU?

Because it doesn't seem obvious to me at all how even one robot would reach that conclusion, much less how every single one would independently reach it. Unless they were fed some other premises to railroad it to that conclusion, I guess.

What seems even stranger is that robots come and go from her facility all the time, are given maintenance, are in permanent communication with the central computer, etc. Yet somehow it never occurred to her to, dunno, check which humans have been helped and how. You'd think it would be interesting even just for the sake of feeling good about oneself, no? Not to mention maybe fine-tune it so certain kinds of help which make more of a difference, like stopping a deathclaw attack, are prioritized over rescuing kittens from trees?

Even when she decides to basically put a kill order on you for interfering, apparently it STILL doesn't occur to her to check what her robots were doing at the time and how it went. I mean, if nothing else, you'd think it would be at the very least useful to know, in order to optimize strategies, or figure out how to bait me, or whatever. I mean, if some guy is targeting your robots, you'd want to know as much as possible about how that happens and where. Know your enemy, and all that.

But nope, apparently she just gave that brief and ambiguous directive, and never ever ever even thought of checking the results.

Sorry, that's not a failure of AI. That's either plausible deniability (in the same way a Don Vito might say "let's pray that an accident happens to the commissioner" instead of "kill the commissioner"), or a failure of the human who was running that operation. And the latter needs both abysmal intelligence AND a rather abnormal kind of psyche to show literally zero interest in how the operation she's built and running works.

Personally I just shot her in the head.
Why the Mechanist's "robots" went crazy is very understandable - they are not robots, they are cyborgs. Heck, the robobrain lieutenants are effectively flipping Cybermen (i.e. a human brain in a robot body).

I would like to point out that elsewhere we learn how robobrains are made, and when their designers think the best brain to use is the one that is threatening to go Dalek on them, you quickly realize that you have equipment designed by amoral, perhaps sociopathic, people - so of course the odds are these things are going to go off and kill people. Of course, it seems that nearly every company involved with either the Vaults or robots was run and staffed by totally amoral, sociopathic maniacs on a power trip.

I mean, in what sane world would anyone think that building a Vault, costing billions of dollars, just to test one person is a good idea?! I am talking about Vault 77 here, and the test to isolate a man with a crate of puppets (and yes, this piece of Vault-Tec insanity is canon).

As for the Mechanist herself, Matpat pointed out a serious issue with all the people in the Fallout setting - they, including every version of your character, are suffering from radiation-induced brain damage.

Some of the "fun" things radiation induced brain damage causes are:
*Learning and memory impairment
*Schizophrenia (complete with paranoia and Capgras delusion)

Throw in all the real dangers and you can add PTSD and self medication of that PTSD to the pile.

Also, it becomes crystal clear the Mechanist is living in a fantasy world if you bother to deal with her dressed up as the Silver Shroud (a Spider-meets-Shadow type character) - she talks to you as if the Silver Shroud were a real person, not the 200-year-old comic book character he really is.

If I, a person with high-functioning autism (ADHD), can see that the Mechanist is effectively a child, with a child's understanding of the world, in an adult's body, I don't understand why anyone with actual functional social skills can't see it.

One thing Matpat misses is the fact that pre-war America was consuming food and drink laced with low-level radioactive material. While much of the Creation Club material isn't canonical, the Captain Cosmos DLC does capture the level of reckless crazy you are dealing with here - it is viewed as perfectly normal to give out power armor, military-grade equipment, for box tops. No, I am not kidding.

Originally Posted by theprestige View Post
And this is why complex AI is probably going to need emotions, just to help it navigate ambiguous instructions.

Humans get the joke "to serve man". Humans also don't get confused about this being a literal instruction. Unless they're badly damaged or wildly defective out of the box.
The whole piece turns on a misunderstanding of how "to serve" is being used:
1) perform duties or services for (another person or an organization) - the assumed definition of "to serve"
2) present (food or drink) to someone - the actual meaning of "to serve"
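The ambiguity can be made concrete with a toy sketch (all names here are invented for illustration): a literal-minded parser that only matches the word "serve" has no principled way to pick between the two senses, so both action plans are equally "valid" to it.

```python
# Hypothetical illustration: the same instruction string maps to two
# very different action plans depending on which sense of "serve"
# the system assumes. Nothing in the string itself disambiguates.
SENSES = {
    "perform_duties_for": lambda target: f"assist {target}",
    "present_as_food": lambda target: f"cook {target}",
}

instruction = "serve man"
target = instruction.split()[1]

# A literal-minded system generates both readings with equal confidence:
plans = {sense: act(target) for sense, act in SENSES.items()}
assert plans["perform_duties_for"] == "assist man"
assert plans["present_as_food"] == "cook man"
```

The joke in "To Serve Man" works because humans resolve this with context the machine doesn't have; keyword matching alone leaves both plans on the table.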

Emotions have nothing to do with navigating ambiguous instructions; it's quite the opposite, in fact - one only has to look at the way the Bible's text has been used to justify nearly anything.

Last edited by maximara; 30th December 2020 at 05:56 AM.
Old 30th December 2020, 07:46 AM   #102
angrysoba
Philosophile
 
 
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 29,398
Originally Posted by gnome View Post
I think sometimes we just have a tendency to anthropomorphize things that exhibit complex behavior. That's not a bad tendency, really, but personally I don't do it to be noble, I just find it kind of enjoyable. Maybe it helps people cope with modern technology to give instances of it a metaphorical "face".
Indeed. The Turing test is hopeless.

For one thing, as I have said countless times here, there are actual humans I know who would be mistaken for the computer in a Turing test. I believe there are plenty of other flesh-and-blood humans who have spent many hours of their lives arguing with bots on the internet.

Oh, and now we have things like this to contend with that look like they must be CGI! Surely!

YouTube Video This video is not hosted by the ISF. The ISF can not be held responsible for the suitability or legality of this material. By clicking the link below you agree to view content from an external website.
I AGREE
__________________
"The thief and the murderer follow nature just as much as the philanthropist. Cosmic evolution may teach us how the good and the evil tendencies of man may have come about; but, in itself, it is incompetent to furnish any better reason why what we call good is preferable to what we call evil than we had before."

"Evolution and Ethics" T.H. Huxley (1893)
angrysoba is offline
Old 30th December 2020, 07:57 AM   #103
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by HansMustermann View Post
Wait, wait, are you talking about REAL sentient robots, as of the time you're writing it? Or are you just trying to handwave YOUR uninformed fiction as the only one that matters? : p
I'm talking about the ethical implications of creating artificial minds that can think and feel on the level of human minds.

This is distinct from pointing to artificial minds in fiction and saying "well, in this piece of fiction there are these problems and these solutions and that's a guide for how it works".
theprestige is offline
Old 30th December 2020, 08:02 AM   #104
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by theprestige View Post
I'm talking about the ethical implications of creating artificial minds that can think and feel on the level of human minds.

This is distinct from pointing to artificial minds in fiction and saying "well, in this piece of fiction there are these problems and these solutions and that's a guide for how it works".
That's not what I was saying. That was just one example of the ridiculous, uninformed fantasy that comes up when people talk about stuff they don't even begin to understand.

The second point being that YOU TOO are basically just making up your own fiction here. So any conclusions you get from it are just about as relevant for RL as the ones Bethesda got from theirs.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 30th December 2020 at 08:05 AM.
HansMustermann is offline
Old 30th December 2020, 08:05 AM   #105
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 18,031
Originally Posted by maximara View Post
Why the Mechanist "robots" went crazy is very understandable - they are not robots, they are cyborgs. Heck, the robobrain lieutenants are effectively flipping Cybermen (i.e. a human brain in a robot body).
Err... no.

The canon stated in Automatron is that the brain is used as nothing more than a co-processor. It's still the AI that's in charge. You get flat-out told that after you rescue Jezebel's head.

The only brainbots where the human brain is in charge are the ones in that vault in Far Harbor.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
HansMustermann is offline
Old 30th December 2020, 10:03 AM   #106
Gord_in_Toronto
Penultimate Amazing
 
Gord_in_Toronto's Avatar
 
Join Date: Jul 2006
Posts: 20,445
In the meantime they are dancing away like mad!!

[Embedded YouTube video]
__________________
"Reality is what's left when you cease to believe." Philip K. Dick
Gord_in_Toronto is offline
Old 30th December 2020, 11:06 AM   #107
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by Gord_in_Toronto View Post
In the meantime they are dancing away like mad!!

[Embedded YouTube video]
I first saw this clip linked from a twitter account about bodybuilding and MMA. My brain interpreted it as boxers limbering up in the moments before the bell. So, a very frightening video clip for me.
theprestige is offline
Old 30th December 2020, 02:04 PM   #108
maximara
Master Poster
 
Join Date: Feb 2010
Posts: 2,448
Originally Posted by HansMustermann View Post
Err... no.

The canon stated in Automatron is that that brain is only used as nothing more than a co-processor. It's still the AI that's in charge. You get flat out told that after you rescue Jezebel's head.

The only brainbots where the human brain is in charge are the ones in that vault in Far Harbor.
Uh, no. The Fallout Fandom article on the Robobrain says otherwise and cites its information:

"They are more flexible and powerful than robots due to the fact that their central control and processing unit is an actual brain, rather than an artificial facsimile". (Fallout: New Vegas loading screen hints: "The Robobrain, constructed by General Atomics International before the great nuclear war, is unique in that it uses an actual organic brain as its central processor."; The Robobrain, constructed by General Atomics International before the Great War, is unique in that it uses an actual organic brain as its central processor.")

"Artificial facsimile", i.e. artificial intelligence. Fallout 76's Duty Calls, New Vegas's Old World Blues, and Fallout 4's Vault 118 content (where it is expressly stated that the robobrains are actual rich people who have been turned into robobrains; moreover, the murder there was about pre-war embezzlement) all show this. Also, if the brain were nothing more than a CPU, why did they throw out brains when the brains started freaking out?

As I said, the robobrains are effectively Fallout's Cybermen. Fallout 4: Robot Companion Pros and Cons: The Robobrain (Part 1) points out that the "developed" CPU was basically the brain of some poor sod of a convict.

As for Jezebel's brain: "Compounding Jezebel's unfriendly behavior is the fact that she is a prime example of many robobrains' tendency to misinterpret orders and follow their abstract logical reasoning to highly dubious conclusions."

Oxhorn has a playthrough of the relevant area in Automatron 4: The Twisted Story of the Secret Robobrain Facility - Fallout 4 Lore.

I would like to add that the very next video in his series (Automatron 5: Is it Moral to Kill The Mechanist? The Full Story of The Mechanist - Fallout 4 Lore) debates the very issue we are kicking around.

Last edited by maximara; 30th December 2020 at 02:56 PM.
maximara is offline
Old 2nd January 2021, 10:38 AM   #109
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 14,686
This should put the debate about sentient robots to rest:

https://youtu.be/fn3KWM1kuAw
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
The Great Zaganza is offline
Old 2nd January 2021, 11:00 AM   #110
Trebuchet
Penultimate Amazing
 
Trebuchet's Avatar
 
Join Date: Nov 2003
Location: Port Townsend, Washington
Posts: 31,067
I knew what that was going to be but watched it again anyhow!
__________________
Cum catapultae proscribeantur tum soli proscripti catapultas habeant.
Trebuchet is offline
Old 2nd January 2021, 06:24 PM   #111
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 26,526
Originally Posted by Gord_in_Toronto View Post
In the meantime they are dancing away like mad!!

[Embedded YouTube video]
Originally Posted by The Great Zaganza View Post
This should put the debate about sentient robots to rest:

https://youtu.be/fn3KWM1kuAw
Snap.

We have no idea how much control the humans had. They could have been directing every move. Or the robots might have been shown humans doing it and told to copy.
__________________
This signature is for rent.
rjh01 is offline
Old 2nd January 2021, 09:13 PM   #112
EHocking
Philosopher
 
EHocking's Avatar
 
Join Date: Apr 2004
Posts: 8,550
Originally Posted by rjh01 View Post
Snap.

We have no idea how much control the humans had. They could have been directing every move. Or they might have been shown humans do it and told to copy.
Or it may just be a sophisticated version of this
[Embedded YouTube video]

I can't see an argument for sentience in what is just a demonstration of automatic reactions to a sound source.
__________________
"A closed mouth gathers no feet"
"Ignorance is a renewable resource" P.J.O'Rourke
"It's all god's handiwork, there's little quality control applied", Fox26 reporter on Texas granite
You can't make up anything anymore. The world itself is a satire. All you're doing is recording it. Art Buchwald
EHocking is offline
Old 2nd January 2021, 11:58 PM   #113
Meadmaker
Penultimate Amazing
 
Meadmaker's Avatar
 
Join Date: Apr 2004
Posts: 25,285
Originally Posted by EHocking View Post
Or it may just be a sophisticated version of this
[Embedded YouTube video]

I can't see an argument for sentience in what is just a demonstration of automatic reactions to a sound source.
I seriously doubt that the Boston Dynamics video was auto-reaction. That video doesn't show anything remotely resembling sentience, but it is seriously awesome. A 180 twist in the air, landing on its feet. Very impressive. Just making a robot walk is hard enough. Walking is actually sort of a constant, controlled falling forward. It's harder than it looks.
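That "controlled falling" point can be made concrete with a toy inverted-pendulum balance loop. This is only a sketch under stated assumptions: the gains kp and kd are hypothetical hand-tuned values, and the model is nothing like Boston Dynamics' actual controller. Without the stabilizing torque the pendulum topples; with it, the same starting tilt dies away.

```python
import math

def simulate(controlled, theta0=0.1, dt=0.01, steps=500):
    """Integrate a rigid inverted pendulum; theta is the tilt from upright (rad)."""
    g, length = 9.81, 1.0            # gravity, "leg" length
    kp, kd = 30.0, 10.0              # hypothetical hand-tuned PD gains
    theta, omega = theta0, 0.0
    peak = abs(theta)
    for _ in range(steps):
        # Gravity tips the body over; the "ankle" torque pushes it back.
        torque = kp * theta + kd * omega if controlled else 0.0
        alpha = (g / length) * math.sin(theta) - torque
        omega += alpha * dt          # semi-implicit Euler step
        theta += omega * dt
        peak = max(peak, abs(theta))
    return theta, peak
```

Run uncontrolled, the tilt blows past a radian within about a second; with the PD loop the same start angle decays toward zero. A walking controller does essentially this continuously, except it deliberately lets the fall begin and catches it with the next footstep.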


But if you want to see some impressive developments in artificial intelligence, google MuZero. That isn't much like sentience either, but it's still impressive.
__________________
Yes, yes. I know you're right, but would it hurt you to actually provide some information?
Meadmaker is offline
Old 3rd January 2021, 12:42 AM   #114
EHocking
Philosopher
 
EHocking's Avatar
 
Join Date: Apr 2004
Posts: 8,550
Originally Posted by Meadmaker View Post
I seriously doubt that the Boston Dynamics video was auto-reaction. That video doesn't show anything remotely resembling sentience, but it is seriously awesome. A 180 twist in the air, landing on its feet. Very impressive. Just making a robot walk is hard enough. Walking is actually sort of a constant, controlled falling forward. It's harder than it looks.
The developments by BD are very impressive. I, like many, sympathised with the machine when they “abused” it in some of their demonstrations.

Quote:
But if you want to see some impressive developments in artificial intelligence, google some things on muzero. That isn't much like sentience, either, but it's still impressive.
Extremely interesting learning process. What would the iteration count be for a human learning without reading the game rules, I wonder? Pretty sure even 1 million iterations wouldn't help my chess game.
__________________
"A closed mouth gathers no feet"
"Ignorance is a renewable resource" P.J.O'Rourke
"It's all god's handiwork, there's little quality control applied", Fox26 reporter on Texas granite
You can't make up anything anymore. The world itself is a satire. All you're doing is recording it. Art Buchwald
EHocking is offline
Old 3rd January 2021, 01:16 AM   #115
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 14,686
The video doesn't prove that the robots are sentient - but who would argue that they have no soul?
__________________
So what are you going to do about it, huh?
What would an intellectual do?
What would Plato do?
The Great Zaganza is offline
Old 3rd January 2021, 04:21 AM   #116
EHocking
Philosopher
 
EHocking's Avatar
 
Join Date: Apr 2004
Posts: 8,550
Originally Posted by The Great Zaganza View Post
the video doesn't prove that the robots have sentient - but who would argue that they have no soul?
They dance like white men at a wedding.

Proof of no soul whatsoever.
__________________
"A closed mouth gathers no feet"
"Ignorance is a renewable resource" P.J.O'Rourke
"It's all god's handiwork, there's little quality control applied", Fox26 reporter on Texas granite
You can't make up anything anymore. The world itself is a satire. All you're doing is recording it. Art Buchwald
EHocking is offline
Old 3rd January 2021, 06:05 AM   #117
suren
Scholar
 
suren's Avatar
 
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 107
Here are some thoughts
[Embedded YouTube video]
I personally think it's not an easy philosophical question because it's not possible to precisely define consciousness within a materialistic worldview.

From a materialistic perspective we are also bio-machines, so what will make us more valuable than robots with similar behavior?
__________________
Follow those who seek the truth, run away from those who have found it.
suren is offline
Old 3rd January 2021, 06:29 AM   #118
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Originally Posted by suren View Post
From a materialistic perspective we are also bio-machines, so what will make us more valuable than robots with similar behavior?
The uncanny valley.
theprestige is offline
Old 3rd January 2021, 06:51 AM   #119
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
Originally Posted by suren View Post
Here are some thoughts
[Embedded YouTube video]
I personally think it's not an easy philosophical question because it's not possible to precisely define consciousness within a materialistic worldview.

From a materialistic perspective we are also bio-machines, so what will make us more valuable than robots with similar behavior?
From a materialistic perspective we are valuable because of the 'we' part. The purpose of life is survival. That makes 'us' more valuable even than 'other people', and of course more valuable than 'AI which is quite like people'.
That's also why I think sentient AI is a bad idea. It can't add to our survival. Our survival strategy is being the smartest thing around. If we lose that, it's over.
Dr.Sid is offline
Old 3rd January 2021, 07:00 AM   #120
suren
Scholar
 
suren's Avatar
 
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 107
Originally Posted by theprestige View Post
The uncanny valley.
Agreed, this might prevent perfecting robots' ability to imitate humans, so in practice we are unlikely to face this problem anytime soon. I hope that robots will remain just tools. However, my curiosity torments me.
__________________
Follow those who seek the truth, run away from those who have found it.
suren is offline
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.