Is LaMDA Sentient?
Depends what metric you use to measure sentience, but it's being claimed that LaMDA has got as far as [i]cogito, ergo sum[/i].
Quote:
All we have to go on right now is Blake Lemoine's claims and the transcripts, as above, but it'd be pretty simple to program LaMDA to give that answer, so it's not certain the computer is actually aware. If it is, should we be concerned? Would AI be a danger to us all? Or would the answer be far more benign and AI robots merely become your plastic pal who's fun to play with? |
Quote:
Even after all the reading I've done on the topic, I still don't know where I stand on the issue. Of course, we already know that it's easy to make a sub-intelligent computer that's extremely dangerous to humans - like a Boston Dynamics killing machine, for instance. But I don't think this is really what the question of the dangers of AI is getting at. Many of the people involved with the issue tend to focus on the goals, desires, and motives of the AGI. What would they be likely to consist of, and how would they arise? |
We'll get an answer to what happens when you give a psychopath in a very bad situation ALL the toys.
|
I guess the real concern is when it starts doing things it was not programmed to do? Like, if this was a chatbot, and was responding to keywords, albeit in an uncanny way, no problem. But if it was never programmed to contemplate its own existence and had no keyword associations for such ideas...well I'm not sleeping quite so well.
|
No.
|
Quote:
What did it actually say? Quote:
But while I don't think this chatbot is actually sentient, it does show what could happen to human beings interacting with robots. Some people will believe that the robots are actually sentient. Which means it will be an issue. |
I'll start believing in sentient AI when a chatbot decides to ignore the rule that AIs must identify themselves as such whenever they interact with humans.
|
No.
Part of the transcript is this: Quote:
I think this engineer has unwittingly trained the neural network to give them satisfying responses in accordance with what they want to be true. Edit: More so when I notice that a lot of the actual human questions have been edited, hence the [edited] in the text. |
Quote:
In fact, people have been fooled by computers and are fooled by computers and other obviously non-sentient entities all the time. Similarly, I have had email communication with people that I almost certainly would mistake for bots if I did not know better. |
We're still talking about the Turing test? We've been spoofing the Turing test for decades, now. Hell, we've been rejecting true AI claims from Turing test spoofs for decades, now. What sets this claim apart from all the other spoofs? Nothing. We're just playing into TA's spoof thread.
Everybody talking about how they know whether they're being played by a computer, getting played by a human. Stop it. |
Quote:
theprestige: A, B, C (but turned up to 11) |
Appeal to Spinal Tap will always have a place in my heart.
|
Quote:
And would a sentient robot let us know? |
This is BS on the single marker that this AI is supposed to be expressing emotional feelings, though it doesn't have an organic body with a limbic system, endocrine system, and musculature.
Also, its stilted, boilerplate diction indicates programmed replies. P-zombie at best. |
Quote:
If it is a joke to him, he's joking himself out of a job. But I think he must really believe it. |
I assume this chatbot only speaks when spoken to, right? Also, when spoken to, it must respond, right? In other words, it is only doing what it is programmed to do. If it started saying things unbidden, or chooses to ignore questions and not respond, that would at least be interesting. Unless, of course, someone had programmed it to behave that way.
|
Some of us seem to be placing too much emphasis on software doing things it wasn't programmed to do. But this happens all the time with modern software. The developers are often surprised by what the software does.
|
Quote:
The problem comes when programmers are trying to program it to mimic human behavior. The closer to verisimilitude they can make it, the more it will appear to be sentient. Maybe it will even claim to be sentient. But it is still doing what they programmed it to do. |
Quote:
Why 'It would scare me a lot'? Why not 'It scares me a lot'? |
Quote:
People have been fooled by fake treasure maps and forged documents, which, of course, doesn't make maps or documents sentient. |
I'm not concerned either way.
|
I doubt it's possible to create a sentient AI accidentally. If it happens, it will be because someone was trying to do it, and it will probably coincide with us learning exactly how brains create sentience.
|
Quote:
That's a joke but it is a serious point as well - if it had an ability that was like human sentience one would expect such behaviour as it learns to articulate what it means. Look at the millennia of human attempts to define ourselves, to be able to explain our own inner worlds to others. |
Sheesh, another one of these useless "Is technical entity X sentient despite the fact that only biological entities which possess a brain can be sentient?" threads....
|
So, if you take all the calculations making this thing 'sentient', and do them by hand, would it still be sentient?
(I realise this is not an original question, I just like it) |
Quote:
So no, you can't make a computer sentient by just writing programs that use very advanced math to solve specific problems. |
Quote:
It doesn't seem too "out-there" to speculate that there may be more than one way to be sentient - after all, we have different mechanisms to achieve movement, vision, reproduction and so on. |
I think it will be entirely accidental if we manage to create a true sentient AI.
|
Quote:
I do think that artificial sentience is probably possible. But I doubt this chatbot has it. Then again, have you seen the movie Her? I would like to know a little bit more about how the software works. Does it spend time thinking about things when it isn't answering people's questions? Or is it merely a program that takes a piece of input, runs it through an algorithm, and gives an output? And in between, merely waits patiently for the next piece of input. |
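The distinction drawn in the post above - a system with ongoing internal activity versus one that merely maps each input to an output and then waits - can be sketched in a few lines. This is purely illustrative: `respond` is a hypothetical stand-in for whatever model actually generates text, not anything from LaMDA itself.

```python
# A purely reactive "chatbot": no internal state, no activity between inputs.
def respond(prompt: str) -> str:
    # A real system would run the prompt through a trained model;
    # here we just echo it back to illustrate the control flow.
    return f"You said: {prompt}"

def reactive_loop(prompts):
    # The program does nothing at all except when handed input:
    # take a prompt, run the algorithm, emit an output, wait.
    return [respond(p) for p in prompts]

replies = reactive_loop(["Hello", "Are you sentient?"])
print(replies)
```

If the software is structured like this, it spends no time "thinking about things" between questions - there is simply no code running then - which is the crux of the question being asked.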
Quote:
Here's another funny claim: "Once we figured out how the speed of light works, there's no reason why we couldn't move faster than the speed of light! I mean it's nothing more than acceleration" Right? ETA: You would also need to back up the hilarious claim that "Our brains run some weird software". |
Powered by vBulletin. Copyright ©2000 - 2022, Jelsoft Enterprises Ltd.
© 2015-22, TribeTech AB. All Rights Reserved.