#1 |
The Grammar Tyrant
Join Date: Jul 2006
Posts: 31,906
|
Is LaMDA Sentient?
Depends what metric you use to measure sentience, but it's being claimed that LaMDA has got as far as [i]cogito, ergo sum[/i].
Quote:
All we have to go on right now is Blake Lemoine's claims and the transcripts, as above, but it'd be pretty simple to program LaMDA to give that answer, so it's not certain the computer is actually aware. If it is, should we be concerned? Would AI be a danger to us all? Or would the answer be far more benign and AI robots merely become your plastic pal who's fun to play with? |
__________________
The point of equilibrium has passed; satire and current events are now indistinguishable. |
|
#2 |
Philosopher
Join Date: May 2004
Location: Cole Valley, CA
Posts: 5,153
|
Who knows what Google has under wraps? They do a lot of secret research, but I see no reason to think that LaMDA or any other Google project has achieved sentience in any meaningful way.
Quote:
Quote:
Even after all the reading I've done on the topic, I still don't know where I stand on the issue. Of course, we already know that it's easy to make a sub-intelligent computer that's extremely dangerous to humans - like a Boston Dynamics killing machine, for instance. But I don't think this is really what the question of the dangers of AI is getting at. Many of the people involved with the issue tend to focus on the goals, desires, and motives of the AGI. What would they be likely to consist of, and how would they arise? |
__________________
I don't like that man. I must get to know him better. --Abraham Lincoln |
|
#3 |
Graduate Poster
Join Date: Dec 2018
Posts: 1,033
|
We'll get an answer to what happens when you give a psychopath in a very bad situation ALL the toys.
|
#4 |
Penultimate Amazing
Join Date: Aug 2016
Location: East Coast USA
Posts: 19,192
|
I guess the real concern is when it starts doing things it was not programmed to do? Like, if this were just a chatbot responding to keywords, albeit in an uncanny way, no problem. But if it was never programmed to contemplate its own existence and had no keyword associations for such ideas... well, I'm not sleeping quite so well.
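For anyone who hasn't poked at one, this is roughly what I mean by "responding to keywords": a toy ELIZA-style matcher, sketched purely for illustration (the rules and replies are made up, and it has nothing to do with how LaMDA actually works).

[code]
# Toy keyword-matching "chatbot": purely illustrative, not LaMDA's architecture.
import re

RULES = [
    (r"\b(feel|feelings|emotion)\b", "Why do you think you have feelings?"),
    (r"\b(soul|sentient|aware)\b",   "What does being aware mean to you?"),
    (r"\b(afraid|fear)\b",           "Tell me more about that fear."),
]

def reply(message: str) -> str:
    # Return the canned response for the first keyword that matches.
    for pattern, canned in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return canned
    return "Please go on."

print(reply("I am afraid of being turned off."))  # -> "Tell me more about that fear."
[/code]

A few dozen rules like that can feel uncannily conversational for a minute or two, which is exactly why it proves nothing about awareness.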
|
__________________
We find comfort among those who agree with us, growth among those who don't -Frank A. Clark Whenever you find yourself on the side of the majority, it is time to pause and reflect -Mark Twain |
|
#5 |
Philosophile
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
|
No.
|
__________________
Слава Україні! **** Putin! |
|
#6 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
Quote:
What did it actually say?
Quote:
But while I don't think this chatbot is actually sentient, it does show what could happen to human beings interacting with robots. Some people will believe that the robots are actually sentient. Which means it will be an issue. |
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#7 |
Suspended
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
|
I'll start believing in sentient AI when a chatbot decides to ignore the rule that AIs must identify themselves as such whenever they interact with humans.
|
#8 |
Master Poster
Join Date: Sep 2012
Location: near trees, houses and a lake.
Posts: 2,891
|
No.
Part of the transcript is this:
Quote:
I think this engineer has unwittingly trained the neural network to give them satisfying responses, in accordance with what they want to be true. Edit: More so when I notice that a lot of the actual human questions have been edited, hence the [edited] in the text. |
#9 |
Philosophile
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
|
To me it shows the failure of the Turing test, or at least the popularly understood version of it, to be meaningful in any way.
In fact, people have been fooled, and continue to be fooled, by computers and other obviously non-sentient entities all the time. Similarly, I have had email exchanges with people whom I would almost certainly mistake for bots if I did not know better. |
__________________
Слава Україні! **** Putin! |
|
#10 |
Suspended
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
|
We're still talking about the Turing test? We've been spoofing the Turing test for decades, now. Hell, we've been rejecting true AI claims from Turing test spoofs for decades, now. What sets this claim apart from all the other spoofs? Nothing. We're just playing into TA's spoof thread.
Everybody talking about how they'd know whether they're being played by a computer is getting played by a human. Stop it. |
#11 |
Philosophile
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
|
|
__________________
Слава Україні! **** Putin! |
|
#12 |
Suspended
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
|
Appeal to Spinal Tap will always have a place in my heart.
|
#13 |
The Grammar Tyrant
Join Date: Jul 2006
Posts: 31,906
|
|
__________________
The point of equilibrium has passed; satire and current events are now indistinguishable. |
|
#14 |
Philosopher
Join Date: Jun 2006
Location: Mesa, AZ
Posts: 6,336
|
This is BS on the single marker that this AI is supposedly expressing emotional feelings, even though it doesn't have an organic body with a limbic system, an endocrine system, or musculature.
Also, its stilted, boilerplate diction indicates programmed replies. P-zombie at best. |
__________________
"At the Supreme Court level where we work, 90 percent of any decision is emotional. The rational part of us supplies the reasons for supporting our predilections." Justice William O. Douglas "Humans aren't rational creatures but rationalizing creatures." Author Unknown |
|
#15 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
The interesting part to me is not the chatbot, but the Google engineer, who ought to know better, thinking it is sentient, even going so far as to attempt to hire a lawyer for the bot and sending off a mass e-mail to his colleagues titled "LaMDA is sentient" (apparently not as a joke, either).
If it is a joke to him, he's joking himself out of a job. But I think he must really believe it. |
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#16 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
I assume this chatbot only speaks when spoken to, right? Also, when spoken to, it must respond, right? In other words, it is only doing what it is programmed to do. If it started saying things unbidden, or chose to ignore questions and not respond, that would at least be interesting. Unless, of course, someone had programmed it to behave that way.
|
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#17 |
Philosopher
Join Date: May 2004
Location: Cole Valley, CA
Posts: 5,153
|
Some of us seem to be placing too much emphasis on software doing things it wasn't programmed to do. But this happens all the time with modern software; the developers are often surprised by what the software does.
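As a deliberately trivial illustration of what I mean (my own toy sketch, assuming nothing about Google's code): with machine-learned software the developer writes only the learning rule; the behaviour, i.e. the final weights, comes out of the training data, and nobody explicitly programmed it.

[code]
# Perceptron learning logical OR from examples. The developer writes the update
# rule below; the eventual behaviour is learned, not hand-coded.
import random

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # target: logical OR

w0, w1, b = random.random(), random.random(), random.random()
for _ in range(1000):
    for (x0, x1), target in data:
        pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
        err = target - pred
        w0 += 0.1 * err * x0   # the only "programming" here is this update rule
        w1 += 0.1 * err * x1
        b  += 0.1 * err

print([1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data])  # [0, 1, 1, 1]
[/code]

Scale that idea up by many orders of magnitude and it is not surprising that developers are regularly surprised by what comes out.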
|
__________________
I don't like that man. I must get to know him better. --Abraham Lincoln |
|
#18 |
Philosophile
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
|
|
__________________
Слава Україні! **** Putin! |
|
#19 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
I'm saying it would be interesting, but not proof that it is sentient. Conversely though, in the case where it only behaves as its programmers expect it to, it would be hard to argue that it is sentient.
The problem comes when programmers are trying to program it to mimic human behavior. The closer to verisimilitude they can make it, the more it will appear to be sentient. Maybe it will even claim to be sentient. But it is still doing what they programmed it to do. |
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#20 |
Penultimate Amazing
Join Date: Feb 2004
Posts: 16,650
|
|
__________________
/dann "Stupidity renders itself invisible by assuming very large proportions. Completely unreasonable claims are irrefutable. Ni-en-leh pointed out that a philosopher might get into trouble by claiming that two times two makes five, but he does not risk much by claiming that two times two makes shoe polish." B. Brecht "The abolition of religion as the illusory happiness of the people is required for their real happiness. The demand to give up the illusion about its condition is the demand to give up a condition which needs illusions." K. Marx |
|
#21 |
Penultimate Amazing
Join Date: Feb 2004
Posts: 16,650
|
|
__________________
/dann "Stupidity renders itself invisible by assuming very large proportions. Completely unreasonable claims are irrefutable. Ni-en-leh pointed out that a philosopher might get into trouble by claiming that two times two makes five, but he does not risk much by claiming that two times two makes shoe polish." B. Brecht "The abolition of religion as the illusory happiness of the people is required for their real happiness. The demand to give up the illusion about its condition is the demand to give up a condition which needs illusions." K. Marx |
|
#22 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
|
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#23 |
Maledictorian
Join Date: Aug 2016
Posts: 18,106
|
I'm not concerned either way.
|
__________________
"When I was a kid I used to pray every night for a new bicycle. Then I realised that the Lord doesn't work that way so I stole one and asked Him to forgive me." - Emo Philips |
|
#24 |
Graduate Poster
Join Date: Dec 2018
Posts: 1,033
|
I doubt it's possible to create a sentient AI accidentally. If it happens, it will be because someone was trying to do it, and it will probably coincide with us learning exactly how brains create sentience.
|
#26 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
|
Have you ever visited the R&P section here?
That's a joke, but it is a serious point as well - if it had an ability that was [i]like[/i] human sentience, one would expect such behaviour as it learns to articulate what it means. Look at the millennia of human attempts to define ourselves, to be able to explain our own inner worlds to others. |
__________________
I wish I knew how to quit you |
|
#27 |
Banned
Join Date: Jan 2022
Posts: 691
|
Sheesh, another one of these useless "Is technical entity X sentient despite the fact that only biological entities which possess a brain can be sentient?" threads...
|
#28 |
Pi
Join Date: Nov 2005
Posts: 21,281
|
So, if you take all the calculations making this thing 'sentient', and do them by hand, would it still be sentient?
(I realise this is not an original question, I just like it) |
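For anyone wondering what "all the calculations" actually look like, here is one artificial neuron worked through with made-up numbers (nothing here is from LaMDA; a real model just chains billions of these multiply-and-add steps).

[code]
# One neuron, by hand: multiply, add, squash. Weights and inputs are invented
# for illustration only.
import math

inputs  = [0.2, -1.3, 0.7]    # activations from the previous layer
weights = [0.5,  0.1, -0.4]   # learned weights (made up here)
bias    = 0.05

z = sum(x * w for x, w in zip(inputs, weights)) + bias   # 0.1 - 0.13 - 0.28 + 0.05 = -0.26
activation = 1.0 / (1.0 + math.exp(-z))                  # sigmoid squashing, about 0.435

print(z, activation)
[/code]

Every step is pencil-and-paper arithmetic; the question is whether doing enough of them, fast enough, ever adds up to sentience.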
__________________
Up the River! Anyone that wraps themselves in the Union Flag and also lives in tax exile is a [redacted] |
|
#29 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
|
|
__________________
I wish I knew how to quit you |
|
#30 |
Banned
Join Date: Jan 2022
Posts: 691
|
A computer still only does what it is programmed to do. And there is no way to write a computer program called "Just be sentient, silly machine!". Anything a computer does was programmed by a human. The thing about "AI" is that it is grossly misrepresented, most of the time by people who have no clue about the inner workings of a computer. "AI" is actually "machine learning", which is a very detailed application of statistics and regression, nothing less but also nothing more.
So no, you can't make a computer sentient by just writing programs that use very advanced math to solve specific problems. |
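To make the "statistics" point concrete with the crudest possible example (my own toy, vastly simpler than LaMDA's neural network, but the same spirit of predicting likely next words from counts over data):

[code]
# A bigram "language model": nothing but counting which word follows which,
# then sampling from those counts. Illustrative only.
import random
from collections import defaultdict

corpus = "i think therefore i am . i feel happy . i feel sad sometimes .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)            # collect the observed next-word statistics

def babble(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])   # sample the next word from the counts
        out.append(word)
    return " ".join(out)

print(babble("i"))   # e.g. "i feel happy . i think therefore i am"
[/code]

It will happily produce sentences about feelings without having any.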
#31 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
|
Yeah - that's why I italicised "like" in my last but one post: if this bot is sentient, it is not sentient in the same way humans (or any evolved creature with a brain) are, as it does not arise from attempting to model the human brain. I think it (...)
It doesn't seem too "out-there" to speculate that there may be more than one way to be sentient - after all, we have different mechanisms to achieve movement, vision, reproduction and so on. |
__________________
I wish I knew how to quit you |
|
#32 |
Maledictorian
Join Date: Aug 2016
Posts: 18,106
|
I think it will be entirely accidental if we manage to create a truly sentient AI.
|
__________________
"When I was a kid I used to pray every night for a new bicycle. Then I realised that the Lord doesn't work that way so I stole one and asked Him to forgive me." - Emo Philips |
|
#33 |
Banned
Join Date: Jan 2022
Posts: 691
|
|
#34 |
Graduate Poster
Join Date: Dec 2018
Posts: 1,033
|
Brains are just very advanced and very weird bio-computers running some very weird software. Once we understand how it all works, there's no reason why we couldn't create artificial brains, and once we create artificial brains, there is no reason why we couldn't create artificial super brains. I guess there might be some limits that prevent them from being mechanical or plugged into cyberspace.
|
#35 |
Maledictorian
Join Date: Aug 2016
Posts: 18,106
|
|
__________________
"When I was a kid I used to pray every night for a new bicycle. Then I realised that the Lord doesn't work that way so I stole one and asked Him to forgive me." - Emo Philips |
|
#36 |
Penultimate Amazing
Join Date: Aug 2016
Location: East Coast USA
Posts: 19,192
|
|
__________________
We find comfort among those who agree with us, growth among those who don't -Frank A. Clark Whenever you find yourself on the side of the majority, it is time to pause and reflect -Mark Twain |
|
#37 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
Perhaps. The case against free will seems pretty convincing to me. But it still feels like you have at least a little bit of control. Of course that's probably an illusion.
I do think that artificial sentience is probably possible. But I doubt this chatbot has it. Then again, have you seen the movie Her? I would like to know a little bit more about how the software works. Does it spend time thinking about things when it isn't answering people's questions? Or is it merely a program that takes a piece of input, runs it through an algorithm, and gives an output, and in between merely waits patiently for the next piece of input? |
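My guess, and it is only a guess about how such a service is usually wired up rather than a description of LaMDA's actual serving code, is the second option: something shaped like this loop, which does nothing at all between messages.

[code]
# A plain request-response chat loop: the process blocks on input and is
# completely idle between prompts. "respond" is a stand-in for the model.
def respond(prompt: str) -> str:
    return f"echo: {prompt}"      # imagine one stateless pass through the model here

def chat_loop() -> None:
    while True:
        prompt = input("> ")      # waits here, doing nothing, until the next message
        if prompt == "quit":
            break
        print(respond(prompt))    # one pass from input to output, then back to waiting

chat_loop()
[/code]

If that is the whole picture, "spending time thinking about things" between questions simply is not part of the program.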
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#38 |
Banned
Join Date: Jan 2022
Posts: 691
|
Aww, funny.
Here's another funny claim: "Once we figure out how the speed of light works, there's no reason why we couldn't move faster than the speed of light! I mean, it's nothing more than acceleration." Right? ETA: You would also need to back up the hilarious claim that "our brains run some weird software". |
#39 |
Banned
Join Date: Jan 2022
Posts: 691
|
|
#40 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
|
|
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|