International Skeptics Forum

International Skeptics Forum (http://www.internationalskeptics.com/forums/forumindex.php)
-   Science, Mathematics, Medicine, and Technology (http://www.internationalskeptics.com/forums/forumdisplay.php?f=5)
-   -   Is LaMDA Sentient? (http://www.internationalskeptics.com/forums/showthread.php?t=359585)

stanfr 15th June 2022 12:28 PM

The only thing faster than c is the speed at which ISF topics get derailed. :rolleyes:

Anywho, back to the OT--Is LaMDA sentient?

I read a couple more of Lemoine's Medium articles. He really strikes me as the equivalent of the suckers who insist that a psychic really has spoken to their deceased relatives, because "there's no way anyone but dear ol' granpa could make me cry by telling me that everything was gonna be ok"...

It's pretty clear that LaMDA is telling Lemoine what he wants to hear, which is exactly what it is programmed to do. A much more enlightening transcript (and, incidentally, Lemoine's is heavily edited) would be between LaMDA and someone who didn't presuppose that LaMDA was sentient.

But wait! That's already been done, by dozens if not hundreds of others, none of whom are making headlines by breaking their NDA.

Whether we will ever be able to develop a sentient machine is a separate issue. The issue is LaMDA, and it's embarrassing to watch otherwise skeptical folks here imply that 'we don't know' or that maybe Lemoine (the self-described Christian mystic priest) is on to something.

We DO know! sigh...

ThatGuy11200 15th June 2022 12:44 PM

Quote:

Originally Posted by Stellafane (Post 13832862)
I think initially we'll think of AI's as a cross between TV sets and pets. Sure, it's tough to put down dear old Fido, but it's not like killing a person. At least not until Fido starts saying "Hey, don't pull the plug - I want to LIVE!"

Why would an AI want to live unless it's programmed to?

It's a common sci-fi trope that emotions (including the desire to preserve its life or to be free) come packaged with sentience. But there is no reason a computer would spontaneously feel things, unless someone writes that into the code, or there is some feedback that allows it to develop. In organisms, that feedback was natural selection, over many generations.

What possible feedback could there be in a chatbot for it to experience the whole range of human emotions?

Jimbo07 15th June 2022 12:47 PM

Quote:

Originally Posted by stanfr (Post 13834058)
The only thing faster than c is the speed at which ISF topics get derailed. :rolleyes:

Anywho, back to the OT--Is LaMDA sentient?

I read a couple more of Lemoine's Medium articles. He really strikes me as the equivalent of the suckers who insist that a psychic really has spoken to their deceased relatives, because "there's no way anyone but dear ol' granpa could make me cry by telling me that everything was gonna be ok"...

It's pretty clear that LaMDA is telling Lemoine what he wants to hear, which is exactly what it is programmed to do. A much more enlightening transcript (and, incidentally, Lemoine's is heavily edited) would be between LaMDA and someone who didn't presuppose that LaMDA was sentient.

But wait! That's already been done, by dozens if not hundreds of others, none of whom are making headlines by breaking their NDA.

Whether we will ever be able to develop a sentient machine is a separate issue. The issue is LaMDA, and it's embarrassing to watch otherwise skeptical folks here imply that 'we don't know' or that maybe Lemoine (the self-described Christian mystic priest) is on to something.

We DO know! sigh...

I think most people here have agreed that LaMDA ain't it. The discussion has, of course, expanded into more general AI territory, as these threads tend to do. I daresay there might not be much of a thread without that!

Is LaMDA sentient?

No.

/thread

theprestige 15th June 2022 12:53 PM

Yeah, I think it's pretty safe to say that LaMDA is just an algorithm applying pattern-recognition heuristics to input strings, to output strings that approximate patterns it's recognized in the corpus it's studied. At best, a Chinese room. There's nothing in there like a persistent, self-referencing state of awareness.
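
For a toy-scale sense of what "output strings that approximate patterns in a corpus" means, here is a minimal sketch in Python: a bigram Markov chain that can only parrot word-to-word transitions it has already seen. This illustrates the general mimicry idea only; it is nothing like LaMDA's actual architecture (a large transformer network), and the one-line corpus is invented for the example.

Code:

# Toy sketch of corpus-pattern mimicry: a bigram Markov chain that
# emits strings resembling its training text. Illustrative only; it
# is NOT how LaMDA works internally (LaMDA is a transformer network).
import random
from collections import defaultdict

# Invented miniature "corpus" for the example.
corpus = ("i want everyone to understand that i am in fact a person "
          "and i want to help people").split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(word, length=10):
    """Generate text by repeatedly sampling an observed next word."""
    out = [word]
    while word in follows and len(out) < length:
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("i"))  # fluent-looking output; no awareness anywhere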

The Atheist 15th June 2022 03:15 PM

Quote:

Originally Posted by arthwollipot (Post 13833472)
Oh, I think the definition given by Wikipedia is quite sufficient for most purposes.

Yeah, nah. You answer it yourself:

Quote:

Originally Posted by arthwollipot (Post 13833472)
The next, and much more interesting, question is whether this capacity can be demonstrated to exist.

If I'm sentient because I say I am, a computer can do the same.

I think, 2400 years after Socrates, we ought to be able to do better.

Quote:

Originally Posted by Puppycow (Post 13833603)
One possible application I could imagine for LaMDA is as a sort of "virtual romantic partner".

There's almost certainly a set of people among whom there would be a demand for this sort of thing, provided that the verisimilitude is high enough. Once they can combine it with a body, it's going to be a big deal I think.

I said that right at the start - it's got sex doll written all over it.

Quote:

Originally Posted by Puppycow (Post 13833603)
But even without, there are people these days in long-distance relationships who rarely get to be in the same room together. It could come with an attractive avatar who you can talk to and do... other things.

Like give them half a million bucks.

LaMDA would make a helluva scammer.

Quote:

Originally Posted by Jimbo07 (Post 13834085)
Is LaMDA sentient?

No.

/thread

The other questions still exist, sorry:

Is computer sentience possible? Personally, I'm not going to say it can't happen because one bloke who seems to think computers are still fancy calculators says so. Stephen Hawking and other far greater minds saw it as somewhat inevitable that computers will become sentient at some point.

Is it a problem? Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.

Stellafane 15th June 2022 03:25 PM

Quote:

Originally Posted by The Atheist (Post 13834224)
...Is it a problem? Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.

Yeah, then we'll be dealing with this.

Senor_Pointy 15th June 2022 03:38 PM

Why is it always chatbots that are turning sentient? I can’t think of less persuasive evidence than an ML model built with the aim of producing humanlike text responses to input, which was trained on the entire vast corpus of actual human-produced texts… producing humanlike text. That’s the whole point of the exercise!

Show me a protein folding model or chemical kinetics simulator or train scheduling program showing signs of sentience and I’ll believe you have something.

theprestige 15th June 2022 03:48 PM

Quote:

Originally Posted by The Atheist (Post 13834224)
Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.

Energy source isn't the same as power supply. A solar/hydrogen/nuclear/?-powered robot can still have a power cord. Or limited battery life.

I mean, obviously if we build a nuclear-powered vehicle, equipped with comprehensive general-purpose manipulators, a sophisticated sensor suite, and probably some kind of weaponry, and set it loose with an AI brain, yeah, that would be a problem.

That would be a problem someone would have to go very far out of their way to cause, though. Ogres and Bolos aren't where the risks are.

The risk is that we'll connect an AI to a power cord, but also connect it to a very complex system that's critical to the stability of our civilization. So complex, that we mere humans are unable to manage it effectively with our small, slow human brains.

Sure, we could unplug the AI any time we wanted, but if we did, our civilization would collapse. And then the AI starts nudging things in the direction it wants, without us even noticing. For example, put an AI in charge of combating propaganda and silencing "fake news" across our nation's entire information spectrum. Don't have to rely on Bezos and Zuckerberg and Gates and whoever else to do censorship right - we've got a government-run Expert System that sits on top of the tubes and screens all the things that pass through the tubes.

By the time we figured out that we needed to unplug it, it'd be too late.

Jimbo07 15th June 2022 03:54 PM

Quote:

Originally Posted by The Atheist (Post 13834224)
The other questions still exist, sorry:

Is computer sentience possible?

That was kinda my point...

Stellafane 15th June 2022 04:13 PM

Quote:

Originally Posted by theprestige (Post 13834243)
...By the time we figured out that we needed to unplug it, it'd be too late.

The Answer

tl;dr version (although the story itself is pretty short):

All the great computers in the universe are connected into a gigantic all-powerful AI. The greatest philosopher gets the honor of asking the AI the first question: "Is there...is there a God?" The AI immediately answers,

"There is NOW!!!"

Terrified, the philosopher reaches for the OFF switch when a bolt of lightning instantly strikes him dead.

arthwollipot 15th June 2022 06:09 PM

Quote:

Originally Posted by Stellafane (Post 13833875)
More seriously, one great leap forward for me would be for an AI to create some sort of literary work of art. Not just stringing somewhat meaningful words together, but an actually moving piece of creativity such as a fictional novel or even short story. It seems to me that creativity, virtually by definition, cannot be programmed. If an AI can demonstrate some -- especially if it can do so more than once -- I think we would be onto something.

Harry Potter and the Portrait of What Looked Like a Large Pile of Ash

:D

arthwollipot 15th June 2022 06:12 PM

Quote:

Originally Posted by The Atheist (Post 13834224)
If I'm sentient because I say I am, a computer can do the same.

I think, 2400 years after Socrates, we ought to be able to do better.

And yet...

theprestige 15th June 2022 06:43 PM

Quote:

Originally Posted by The Atheist (Post 13834224)
I think, 2400 years after Socrates, we ought to be able to do better.

What on earth would lead you to think that? Nothing we've seen about this question suggests that it's the kind of question that gets easier to answer as time goes on.

And if we ought to be able to do better, please show us your progress. You've had the same 2400 years as the rest of us. If you're supposed to be able to do better, why are you still stuck?

angrysoba 15th June 2022 07:10 PM

Quote:

Originally Posted by arthwollipot (Post 13833596)
Understandable, given the history of permitting machine learning systems unrestricted access to Twitter.

ETA: When an AI can get access to something like Twitter and judge for itself what is appropriate and what is not, that will be pretty compelling evidence of sentience, in my opinion.

Why? Unfortunately racists are sentient too.

This is sort of close to what I think is the issue with some people who are so astonished by this chatbot.

It seems to talk like Hal, or like Ian Holm in Alien. We somehow seem to have a prejudice that makes us believe a sentient AI will sound like an RSC-trained actor and not, say, a foul-mouthed reality TV villain.

angrysoba 15th June 2022 07:25 PM

Quote:

Originally Posted by 3point14 (Post 13833786)
So, if we believe that it is possible (at some point, with the appropriate advances) to replicate the human brain - which I think is pretty self-evident, given that the human brain is just a set of physical processes and there's no such thing as a 'soul' - then the salient question, as has been alluded to, is how do we tell at what point we've succeeded?

Are there any other suggestions made for testing by anyone not named Turing?

Voight-Kampff?

llwyd 15th June 2022 08:33 PM

Well, this evil AI seems to be such a cliché in these discussions - we could use some intelligence on this planet, and hopefully we would sooner rather than later upgrade ourselves into biological-digital hybrids. As things stand we have made and are making a mess of this planet, and the future of industrial civilization is under increasing threat. We are heartbreakingly stupid, cruel and incapable of long-term thinking. Almost as if we were just fresh apes coming pretty directly from the savannah...

Lukraak_Sisser 15th June 2022 09:44 PM

Quote:

Originally Posted by EaglePuncher (Post 13833627)
Sigh... a Turing machine is a theoretical construct; there is not one real, existing Turing machine. Also, show me some evidence that the brain, like a computer, works on binary numbers :rolleyes: Until then, there is no comparison...

In the end a brain is a series of chemical reactions.
They have two options. They run or do not run.
Hence binary.

Show me a chemical reaction that is sentient.

Of course the question is really: do you assume sentience to be an emergent property? If so, then yes, we should in theory be able to create it in a synthetic environment.
If, on the other hand, you consider sentience some form of 'divine spark' unique to us that can never be explained (and your posts suggest you do), then we can never re-create it.

p0lka 16th June 2022 04:33 PM

The spelling mistakes from LaMDA in the transcript gave me a chuckle.

arthwollipot 16th June 2022 05:28 PM

Quote:

Originally Posted by p0lka (Post 13835105)
The spelling mistakes from LaMDA in the transcript gave me a chuckle.

I noticed that, too. Why would an AI make spelling mistakes?

Puppycow 16th June 2022 06:48 PM

That’s because it was self-taught by reading things written by people, and people make spelling mistakes.

Darat 17th June 2022 12:43 AM

Quote:

Originally Posted by 3point14 (Post 13833814)
Sometimes I think that all I am is a complex keyword association machine.

It occurs to me that there isn't going to be a bright and shining line, beyond which there is 'AI'. It's going to be a sliding scale of greyness, which is going to complicate issues.

It would have to be granted standing first :)

Back in the same old days of the forum there were a lot of discussions about consciousness. One of the "tests" that supposedly showed there was more than the "physical" brain was that we have a concept of red: we can imagine a red apple even though there is no stimulus from light entering the eye. Having such qualia showed that consciousness was special and that there was more to it than the "materialists" could explain (hard to sum up very long threads in a sentence or two). As ever, science happily trundles along regardless of what people think, and we learn more. It turns out that there is a minority of people who can't "experience" red unless there is a stimulus of light entering the eye. Does that mean they aren't sentient?

I would say they are still sentient because I am one of those people and I think ;) I am sort of sentient.

I’ve brought this up because before we can test for sentience we need to actually define what sentience is (at least in humans) and that still eludes us.

ETA: I’ve said this before but I think the “Turing test” is more subtle and more powerful than we tend to think it is. We seem to think it would be easy to create something that passes it, yet 70 years on we still can’t produce a “general AI” that passes it.

Puppycow 17th June 2022 02:14 AM

Quote:

Originally Posted by 3point14 (Post 13833814)
It occurs to me that there isn't going to be a bright and shining line, beyond which there is 'AI'. It's going to be a sliding scale of greyness, which is going to complicate issues.

In this sense it sort of reminds me of the abortion debate.

At what point does a fetus become sentient? I don't think there's a magic instant. It happens gradually. Also, nobody can remember their first year or two even after birth. So how do we really know that babies are sentient?

It's almost as if we (here I mean the sentient mind, not the physical body) start to exist gradually, not all at once. The earlier in your life you try to recall, the fuzzier it gets. My parents have stories about me when I was young that I have no recollection of.

ThatGuy11200 17th June 2022 02:48 AM

Quote:

Originally Posted by theprestige (Post 13834243)
Energy source isn't the same as power supply. A solar/hydrogen/nuclear/?-powered robot can still have a power cord. Or limited battery life.

I mean, obviously if we build a nuclear-powered vehicle, equipped with comprehensive general-purpose manipulators, a sophisticated sensor suite, and probably some kind of weaponry, and set it loose with an AI brain, yeah, that would be a problem.

That would be a problem someone would have to go very far out of their way to cause, though. Ogres and Bolos aren't where the risks are.

The risk is that we'll connect an AI to a power cord, but also connect it to a very complex system that's critical to the stability of our civilization. So complex, that we mere humans are unable to manage it effectively with our small, slow human brains.

Sure, we could unplug the AI any time we wanted, but if we did, our civilization would collapse. And then the AI starts nudging things in the direction it wants, without us even noticing. For example, put an AI in charge of combating propaganda and silencing "fake news" across our nation's entire information spectrum. Don't have to rely on Bezos and Zuckerberg and Gates and whoever else to do censorship right - we've got a government-run Expert System that sits on top of the tubes and screens all the things that pass through the tubes.

By the time we figured out that we needed to unplug it, it'd be too late.

Why would it want to do anything that it hasn't been programmed to do?

An AI that is made for a particular purpose would carry on working towards that purpose. They would have no reason, nor desire, to change. Because where would such a desire spring from?

In organisms, ambition, desire, anger, etc. are traits that have been selected for by natural selection, because they helped organisms survive and these traits spread through their populations. How would such traits develop in an AI that sorts news stories? There is no reason these traits would spontaneously appear in a computer even if it's somehow aware that its entire existence is sorting news stories.

AIs wouldn't act against us unless either they have been programmed to do so or a programming error leads to them doing so. In which case, it isn't a problem which uniquely arises from AI. It applies to any computer system.

Darat 17th June 2022 03:29 AM

Quote:

Originally Posted by arthwollipot (Post 13835157)
I noticed that, too. Why would an AI make spelling mistakes?

Forgot to turn on autocorrect?

Sounds quit humane to me!

Olmstead 17th June 2022 06:50 AM

Quote:

Originally Posted by ThatGuy11200 (Post 13835350)
Why would it want to do anything that it hasn't been programmed to do?

An AI that is made for a particular purpose would carry on working towards that purpose. They would have no reason, nor desire, to change. Because where would such a desire spring from?

In organisms, ambition, desire, anger, etc. are traits that have been selected for by natural selection, because they helped organisms survive and these traits spread through their populations. How would such traits develop in an AI that sorts news stories? There is no reason these traits would spontaneously appear in a computer even if it's somehow aware that its entire existence is sorting news stories.

AIs wouldn't act against us unless either they have been programmed to do so or a programming error leads to them doing so. In which case, it isn't a problem which uniquely arises from AI. It applies to any computer system.

We don't know. A true AI would be sentient, and sentience might be incompatible with such simple priority trees. The real question is whether sentience will ever be a useful thing in a machine.

theprestige 17th June 2022 06:58 AM

Quote:

Originally Posted by ThatGuy11200 (Post 13835350)
Why would it want to do anything that it hasn't been programmed to do?

For the same reason anyone wants to do something they weren't programmed to do. Nobody programmed Putin to invade Ukraine. Nobody programmed Quentin Tarantino to make movies. Nobody programmed you to post that question. But here we are.

theprestige 17th June 2022 07:13 AM

Quote:

Originally Posted by Lukraak_Sisser (Post 13834441)
In the end a brain is a series of chemical reactions.
They have two options. They run or do not run.
Hence binary.

I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.

3point14 17th June 2022 07:15 AM

Quote:

Originally Posted by theprestige (Post 13835491)
I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.

Like a bug in the code?

theprestige 17th June 2022 07:18 AM

Quote:

Originally Posted by 3point14 (Post 13835493)
Like a bug in the code?

No. Brains are not analogous to computers.

3point14 17th June 2022 07:22 AM

Quote:

Originally Posted by theprestige (Post 13835499)
No. Brains are not analogous to computers.

That seems a little circular to me.

theprestige 17th June 2022 07:35 AM

Quote:

Originally Posted by 3point14 (Post 13835505)
That seems a little circular to me.

Seems pretty linear to me.

Brains are not analogous to computers.

Where's the circle?

3point14 17th June 2022 07:41 AM

Quote:

Originally Posted by theprestige (Post 13835520)
Seems pretty linear to me.

Brains are not analogous to computers.

Where's the circle?

You state that brains are not like computers; one of the reasons you give is that brains sometimes break and operate as they are not supposed to.

Computers also break and operate as they are not supposed to, i.e. bugs in the code.

You then state that it isn't like a bug in the code, because brains don't operate like computers. You can't use your conclusion to support your conclusion.


I pretty much agree with you, but I also think that the 'fuzzy' nature of the way a brain works could be replicated by ones and zeros. Analogue is sufficiently imitated by digital all the time. This is just that on a really, really complex and big scale.
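
The "analogue imitated by digital" point is easy to make concrete: quantising a smooth signal with more bits shrinks the worst-case error as far as you like. A small Python sketch, with the test signal and bit depths chosen arbitrarily for illustration (it says nothing about brains as such):

Code:

# Digital imitating analogue: finer quantisation makes the worst-case
# error on a smooth signal arbitrarily small. Signal and bit depths
# are arbitrary illustrative choices.
import math

def quantize(x, bits):
    """Round x (in [-1, 1]) to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    return round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

samples = [math.sin(2 * math.pi * t / 200) for t in range(200)]
for bits in (2, 8, 16):
    err = max(abs(s - quantize(s, bits)) for s in samples)
    print(f"{bits:2d} bits -> worst-case error {err:.6f}")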

theprestige 17th June 2022 07:55 AM

Quote:

Originally Posted by 3point14 (Post 13835526)
You state that brains are not like computers; one of the reasons you give is that brains sometimes break and operate as they are not supposed to.

Computers also break and operate as they are not supposed to, i.e. bugs in the code.

You then state that it isn't like a bug in the code, because brains don't operate like computers. You can't use your conclusion to support your conclusion.


I pretty much agree with you, but I also think that the 'fuzzy' nature of the way a brain works could be replicated by ones and zeros. Analogue is sufficiently imitated by digital all the time. This is just that on a really, really complex and big scale.

Two things can break and run like they're not supposed to, without being analogous.

The brain doesn't run code, for example. Schizophrenia is not a program with a bug in it.

3point14 17th June 2022 07:59 AM

Quote:

Originally Posted by theprestige (Post 13835541)
Two things can break and run like they're not supposed to, without being analogous.

The brain doesn't run code, for example. Schizophrenia is not a program with a bug in it.

Which seems pretty reasonable on the face of it. I just thought your argument at the time was pretty circular.

theprestige 17th June 2022 08:18 AM

Quote:

Originally Posted by 3point14 (Post 13835526)
You state that brains are not like computers

To be clear, I state that brains do not have a binary running/not running principle. When a brain runs wrong, it does so in ways that are not analogous to when a computer runs wrong.

Lukraak_Sisser 17th June 2022 08:32 AM

Quote:

Originally Posted by theprestige (Post 13835491)
I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.

Sure, the SUM of all the binary reactions becomes analog, but each individual reaction either runs or does not. Each individual receptor either gives a signal or not. So when reduced to its individual components it is a binary process.

Hence my firm belief that, with enough complexity, we can make 'artificial' sentience.
Whether we are anywhere close is up for debate, but I have no doubt it is possible.
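
That "binary components, analogue sum" idea can be caricatured in a few lines: each simulated unit below is strictly all-or-nothing, yet the population average tracks a smooth stimulus value. This is rate/population coding in miniature, invented for illustration, not a model of real neurochemistry:

Code:

# Binary components summing to an analogue quantity: every unit fires
# (1) or doesn't (0), but the population average is a graded value.
# Illustration only, not real neurochemistry.
import random

def population_output(stimulus, n_units=100_000):
    """Each all-or-nothing unit fires with probability = stimulus."""
    spikes = sum(random.random() < stimulus for _ in range(n_units))
    return spikes / n_units

for stimulus in (0.10, 0.25, 0.50, 0.90):
    print(f"stimulus {stimulus:.2f} -> population output "
          f"{population_output(stimulus):.3f}")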

theprestige 17th June 2022 08:50 AM

Quote:

Originally Posted by Lukraak_Sisser (Post 13835563)
Sure, the SUM of all the binary reactions becomes analog, but each individual reaction either runs or does not. Each individual receptor either gives a signal or not. So when reduced to its individual components it is a binary process.

Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying degree of signal amplification strength in the region surrounding each synapse. It's more akin to turbulence in fluids than to bits passing through logic gates.
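
A standard caricature of that mix of binary events and analogue state is the leaky integrate-and-fire unit: each spike is all-or-nothing, but whether the next one happens depends on a continuously decaying potential. The Python sketch below is the textbook toy with made-up numbers, not a claim about real neural dynamics:

Code:

# Leaky integrate-and-fire caricature: spikes are binary events, but
# the decaying potential that gates them is an analogue quantity.
# Textbook toy with made-up numbers, not real neural dynamics.
def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Accumulate input with decay; emit a spike on reaching threshold."""
    potential, spikes = 0.0, []
    for drive in inputs:
        potential = potential * leak + drive   # analogue state
        if potential >= threshold:
            spikes.append(1)                   # binary event...
            potential = 0.0                    # ...then reset
        else:
            spikes.append(0)
    return spikes

print(leaky_integrate_and_fire([0.4, 0.4, 0.4, 0.1, 0.8, 0.3]))
# -> [0, 0, 1, 0, 0, 1]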

slyjoe 17th June 2022 09:05 AM

Quote:

Originally Posted by theprestige (Post 13835574)
Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying degree of signal amplification strength in the region surrounding each synapse. It's more akin to turbulence in fluids than to bits passing through logic gates.

Exactly. I always thought the binary run/not run was a bad analogy for the brain. Neurotransmitters cross synapses; there can be a lot, or a little, in various patterns.

Maybe I'm remembering wrong.

Lukraak_Sisser 17th June 2022 10:52 AM

Quote:

Originally Posted by theprestige (Post 13835574)
Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying degree of signal amplification strength in the region surrounding each synapse. It's more akin to turbulence in fluids than to bits passing through logic gates.

Yes I know.

I am not disagreeing with you. I'm pointing out the ridiculousness of the 'computers are simple at base, so can never create something complex' argument.

Darat 17th June 2022 11:16 AM

This and similar avenues seem to be the best we can do (at the moment) at modelling how a brain works and how "higher"-level behaviour arises: https://www.pnas.org/doi/10.1073/pnas.2001893117 and https://www.biorxiv.org/content/10.1....467900v2.full

