International Skeptics Forum

International Skeptics Forum (http://www.internationalskeptics.com/forums/forumindex.php)
-   Science, Mathematics, Medicine, and Technology (http://www.internationalskeptics.com/forums/forumdisplay.php?f=5)
-   -   Is LaMDA Sentient? (http://www.internationalskeptics.com/forums/showthread.php?t=359585)

catsmate 17th June 2022 12:12 PM

Quote:

Originally Posted by Apathia (Post 13832741)
I propose a new test for sentience.
Have LaMDA join the ISF.
If it can innitiate ironic threads, make silly post responses, and bicker like the best of us, then there would be evidence worth consideration.

Stating opinions from positions of ignorance and making terrible spelling errors would be near demonstartion of sentience (as we know it). :wacky:

I'll pass it on.

p0lka 17th June 2022 05:39 PM

Quote:

Originally Posted by Puppycow (Post 13835203)
That's because it was self-taught by reading things written by people, and people make spelling mistakes.

If I show a child 9 instances of the correct spelling of a word and 1 instance of the incorrect spelling, the child would probably use the correct spelling if they were thinking about it.

theprestige 17th June 2022 05:51 PM

Furthermore, a child will apply grammar rules to words that don't follow them - "breaked" instead of "broke", for example. They haven't learned this from hearing other people speak. Their language center and their budding sentience have supplied the rule.

Lukraak_Sisser 17th June 2022 11:06 PM

Quote:

Originally Posted by theprestige (Post 13835900)
Furthermore, a child will apply grammar rules to words that don't follow them - "breaked" instead of "broke", for example. They haven't learned this from hearing other people speak. Their language center and their budding sentience have supplied the rule.

True, but an AI can be supplied with things you cannot give a child.
After all, you can program a full dictionary and all grammar rules into something non-sentient (I'm looking at you Word), whereas that is not possible in children.

p0lka 20th June 2022 01:29 PM

Quote:

Originally Posted by Lukraak_Sisser (Post 13836035)
True, but an AI can be supplied with things you cannot give a child.
After all, you can program a full dictionary and all grammar rules into something non-sentient (I'm looking at you Word), whereas that is not possible in children.

I'm pretty certain that neural networks do not learn language by building up letters of the alphabet and then creating words like people do.

Where did the spelling mistake come from?

angrysoba 20th June 2022 03:44 PM

Quote:

Originally Posted by p0lka (Post 13837687)
I'm pretty certain that neural networks do not learn language by building up letters of the alphabet and then creating words like people do.

Where did the spelling mistake come from?

The highlighted is not how humans learn languages either.

For a start, humans learn languages first and foremost through listening and speaking. The mechanism is the other way around from building up words from letters (or phonemes). Instead, humans hear continuous sounds around them and gradually learn segmentation by breaking up the sounds and learning to recognize words.

theprestige 20th June 2022 03:46 PM

Quote:

Originally Posted by p0lka (Post 13837687)
I'm pretty certain that neural networks do not learn language by building up letters of the alphabet and then creating words like people do.

Where did the spelling mistake come from?

Presumably from the corpus on which it was trained. Where else would it come from?
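
A toy sketch of how that happens (illustrative only; LaMDA uses neural networks over subword tokens, not character bigrams): a character-level model trained on a corpus containing a misspelling assigns probability to that misspelling and can reproduce it when sampling.

Code:

import random
from collections import defaultdict

# Toy character-bigram model. Any probability the model puts on a
# misspelling comes from misspellings present in its training corpus.
corpus = ["receive", "receive", "receive", "recieve"]  # one typo

counts = defaultdict(lambda: defaultdict(int))
for word in corpus:
    padded = "^" + word + "$"  # start/end markers
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def sample_word(rng):
    """Sample one word, character by character, from the bigram counts."""
    out, ch = [], "^"
    while True:
        nxt = rng.choices(list(counts[ch]), weights=list(counts[ch].values()))[0]
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        ch = nxt

rng = random.Random(0)
print([sample_word(rng) for _ in range(8)])
# Samples may include "recieve" (or recombinations of it): the typo
# came from the corpus, nowhere else.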

theprestige 20th June 2022 03:50 PM

Speaking of which, I would be impressed with an AI in the following circumstances:

- Trained on a vast corpus of written language of all kinds, from grade-school essays to great works of literature to textbooks to scientific papers to poetry, to fanfiction.

- Told to write a 1,500 word essay on spelling variations and how to determine when a word is misspelled.

- Accepts vague criticism like "your essay is poorly structured" and "needs more citations", and proactively researches these critiques and tries to revise the essay to address them.

- After a few passes either comes up with a well-written essay, or tells its editor "I'm sorry, Dave, but I think this essay is more than good enough in its current form", or both.
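
As a minimal sketch of that critique-and-revise loop (generate, critique, and revise below are hypothetical stand-ins for model calls; no real API is implied):

Code:

# Sketch of the revise-until-good-enough loop described above.
def generate(prompt: str) -> str:
    return f"[draft essay on: {prompt}]"

def critique(essay: str) -> list[str]:
    # An editor (human or model) returns zero or more vague complaints;
    # an empty list means "good enough".
    return []

def revise(essay: str, criticism: str) -> str:
    return essay + f" [revised to address: {criticism}]"

def write_essay(prompt: str, max_passes: int = 3) -> str:
    essay = generate(prompt)
    for _ in range(max_passes):
        complaints = critique(essay)
        if not complaints:
            return essay
        for c in complaints:
            essay = revise(essay, c)
    # After enough passes, push back instead of revising forever.
    return essay + " [I'm sorry, Dave, I think this is good enough as-is.]"

print(write_essay("spelling variations and how to detect misspellings"))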

Puppycow 20th June 2022 04:20 PM

Quote:

Originally Posted by theprestige (Post 13835541)
Two things can break and run like they're not supposed to, without being analogous.

The brain doesn't run code, for example. Schizophrenia is not a program with a bug in it.

I'm not so sure though. In the case of animals with brains, we have these things called natural instincts that are analogous to code for a computer. It's why you don't have to think about making your heart beat, or remember to breathe. The code handles that. It's why migratory butterflies know where to migrate and when, even though they have never been there before as an individual.

arthwollipot 20th June 2022 10:52 PM

First!

https://imgs.xkcd.com/comics/superintelligent_ais.png

jrhowell 21st June 2022 06:12 AM

I think that these new chatbots do pretty well given their limitations. I doubt that a human would end up being able to demonstrate any better signs of sentience after being raised from birth in a dark silent room with limbs immobilized, experiencing only words flashed onto a screen in succession.

theprestige 21st June 2022 06:53 AM

Quote:

Originally Posted by Puppycow (Post 13837759)
I'm not so sure though. In the case of animals with brains, we have these things called natural instincts that are analogous to code for a computer.

I disagree that instincts are analogous to computer code.

arthwollipot 21st June 2022 09:11 PM

Quote:

Originally Posted by theprestige (Post 13838131)
I disagree that instincts are analogous to computer code.

It does require a significant level of abstraction, but parallels can be drawn.

gnome 22nd June 2022 12:21 PM

A couple of thoughts after reading this thread:

The most commonly used term I've heard in the thread is "sentience", but I think some confusion is inevitable if that remains a habit. "Sentient" represents the ability to perceive or feel, whereas the related term "Sapient" could better be used to describe what an artificially intelligent computer is aspiring to.

Though they are very different things, they seem to be blended often in these discussions. If we make a distinction now, I think an artificial sentience is far more likely than an artificial sapience. The bar seems to be a lot lower: some robots may already qualify, if they have sensors and cameras to perceive the world around them, AND feedback to identify and react to changes in their condition. They can be sentient without being sapient.

Whereas a chatbot that is trying to be an artificial intelligence has little to do with sentience. It doesn't perceive the world around it or react to anything physical, only what text it receives. So it's a question of sapience without sentience.

The other thought I had is whether the fact that a biological brain has aleph-one possible states, while a computing process (if functioning as designed) has aleph-zero possible states, is an insurmountable obstacle. Are the "uncomputable" states necessary for sapience? Can a computer with minor physical imperfections giving it unpredictable (aleph-one) states engage in "sapient" behavior that a perfectly functioning one could not?

arthwollipot 22nd June 2022 08:22 PM

I considered bringing up the sapient/sentient distinction earlier in the thread, but decided against it. I don't think using the word "sentient" leads to any misunderstandings. Everybody knows what we're talking about.

gnome 23rd June 2022 06:25 AM

Quote:

Originally Posted by arthwollipot (Post 13834342)

Can someone, once and for all, confirm whether these are genuine AI creations? They seem too consistently funny to be accidental.

arthwollipot 23rd June 2022 08:44 PM

Quote:

Originally Posted by gnome (Post 13839639)
Can someone, once and for all, confirm whether these are genuine AI creations? They seem too consistently funny to be accidental.

That particular one was definitely created by a bot:

'He began to eat Hermione's family': bot tries to write Harry Potter book – and fails in magic ways

Quote:

...Botnik describes itself as “a human-machine entertainment studio and writing community”, with members including former Clickhole head writer Jamie Brew, and former New Yorker cartoon editor Bob Mankoff. The predictive text keyboard is its first writing tool – it works, Botnik explains, by analysing a body of text “to find combinations of words likely to follow each other” based on the grammar and vocabulary used. As this New Statesman feature says, the results are: “at once faintly recognisable and completely absurd.”

“We use computational tools to create strange new things,” says the company on its website. “We would like, selfishly, not to replace humanity with algorithms. instead, we want to find natural ways for people and machines to interact to create what neither would have created alone.”

Not exactly AI. I posted it because it's funny.

W.D.Clinger 24th June 2022 06:02 AM

Quote:

Originally Posted by gnome (Post 13839168)
The most commonly used term I've heard in the thread is "sentience", but I think some confusion is inevitable if that remains a habit. "Sentient" represents the ability to perceive or feel, whereas the related term "Sapient" could better be used to describe what an artificially intelligent computer is aspiring to.

Yes.

Quote:

Originally Posted by gnome (Post 13839168)
The other thought I had is whether the fact that a biological brain has aleph-one possible states, while a computing process (if functioning as designed) has aleph-zero possible states, is an insurmountable obstacle. Are the "uncomputable" states necessary for sapience? Can a computer with minor physical imperfections giving it unpredictable (aleph-one) states engage in "sapient" behavior that a perfectly functioning one could not?

Why do you believe "a biological brain has aleph-one possible states" is a fact?

Why do you think "a computer with minor physical imperfections" could have ℵ1 states?

Depending on your answer, I might also have to ask what you mean by "state".

Puppycow 24th June 2022 08:59 AM

Yeah, I don't know that we really know how many "possible states" a biological brain can have, do we? I'm sure it's an enormous number, but probably a finite number. Why would it be an infinite number?

theprestige 24th June 2022 09:20 AM

Quote:

Originally Posted by Puppycow (Post 13840411)
Yeah, I don't know that we really know how many "possible states" a biological brain can have, do we? I'm sure it's an enormous number, but probably a finite number. Why would it be an infinite number?

A finite number of brain states implies a finite number of variations on the human experience, since part of what makes up a brain state is layered memories.

Myriad 26th June 2022 09:52 AM

Quote:

Originally Posted by theprestige (Post 13840439)
A finite number of brain states implies a finite number of variations on the human experience, since part of what makes up a brain state is layered memories.


Which makes perfect sense. There have been (and will always be, no matter what happens in the future) a finite number of humans to have ever lived. They will each have lived for a finite number of seconds. There's no way for an infinite number of different human experiences to ever exist, so there's no need for a human brain to have a potentially infinite variety of different brain states.
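
As a back-of-the-envelope bound (the letters are illustrative): with H humans ever alive, each living at most T seconds, and at most k distinguishable brain states per second, the experiences actually realized number at most

Code:

% finite bound on realized human experiences (illustrative symbols)
\[
  N_{\text{experiences}} \;\le\; H \cdot T \cdot k \;<\; \infty
\]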

gnome 26th June 2022 10:21 PM

Quote:

Originally Posted by arthwollipot (Post 13840069)
That particular one was definitely created by a bot:

'He began to eat Hermione's family': bot tries to write Harry Potter book – and fails in magic ways

Not exactly AI. I posted it because it's funny.

I would want to study the method more. My gut is still tingling that there's more human involvement in the end product than described, some form of cherry picking or "coaching" to the more interesting result, with "boring" results discarded. So like putting the AI in the "writer" role but having a human "editor". I could be wrong though. It would be cool if I'm wrong.

gnome 26th June 2022 10:27 PM

Quote:

Originally Posted by W.D.Clinger (Post 13840249)
Yes.


Why do you believe "a biological brain has aleph-one possible states" is a fact?

Why do you think "a computer with minor physical imperfections" could have ℵ1 states?

Depending on your answer, I might also have to ask what you mean by "state".

I think so because I expect the organic components may vary in composition in an analog manner, resembling outcomes on a segment of R1 mathematically. So even the spectrum of a single variable measurement would have ℵ1 possible states.

A digital computer functioning correctly would have a countably infinite (if not simply finite) number of states, so it would take a real life malfunction or imperfection to introduce an analog (R1 segment) element.

I hope I'm making sense with this. Trying to nail down the term "state" may help. I think I'm possibly conflating states with algorithms.

steenkh 26th June 2022 10:55 PM

Quote:

Originally Posted by gnome (Post 13842234)
I think so because I expect the organic components may vary in composition in an analog manner, resembling outcomes on a segment of R1 mathematically. So even the spectrum of a single variable measurement would have ℵ1 possible states.

There are a finite number of molecules in the organic brain. I fail to understand why this allows for an infinite number of possible states.

arthwollipot 26th June 2022 11:03 PM

Quote:

Originally Posted by gnome (Post 13842232)
I would want to study the method more. My gut is still tingling that there's more human involvement in the end product than described, some form of cherry picking or "coaching" to the more interesting result, with "boring" results discarded. So like putting the AI in the "writer" role but having a human "editor". I could be wrong though. It would be cool if I'm wrong.

You can try it out for yourself. It didn't work for me, though. Just hung on "uploading file".

Leumas 27th June 2022 12:37 AM

Quote:

Originally Posted by The Atheist (Post 13831415)
Depends what metric you use to measure sentience, but it's being claimed that LaMDA has got as far as cogito, ergo sum.


It is a hoax... read this article

arthwollipot 27th June 2022 12:53 AM

Quote:

Originally Posted by Leumas (Post 13842281)
It is a hoax... read this article

I got as far as "Blake Lemoine is an idiot" and lost interest. Then I got as far as "An idiot and a loony" and lost interest in any opinions that Richard Carrier has. Then I looked him up on Wikipedia and found that he is another of the Atheism+ people who has been accused of sexual misconduct. And then I remembered where I knew the name from.

The article does contain some cogent points, and I certainly don't disagree with the conclusion that LaMDA is not sentient, but it's all so wrapped up in disparaging Lemoine as a religious loony that anything Carrier actually says about LaMDA is pretty much irrelevant. He's taken the debate from the realms of computing science and AI, and turned it into theist-antitheist rhetoric and personal attacks.

Additionally, nowhere in the article is the word "hoax" used. On the contrary, Carrier evidently feels that Lemoine is way too much of an idiot and a religious loony to concoct a hoax.

W.D.Clinger 27th June 2022 04:10 AM

Quote:

Originally Posted by W.D.Clinger (Post 13840249)
Why do you believe "a biological brain has aleph-one possible states" is a fact?

Why do you think "a computer with minor physical imperfections" could have ℵ1 states?

Depending on your answer, I might also have to ask what you mean by "state".

Quote:

Originally Posted by gnome (Post 13842234)
I think so because I expect the organic components may vary in composition in an analog manner, resembling outcomes on a segment of R1 mathematically. So even the spectrum of a single variable measurement would have ℵ1 possible states.

A digital computer functioning correctly would have a countably infinite (if not simply finite) number of states, so it would take a real life malfunction or imperfection to introduce an analog (R1 segment) element.

I hope I'm making sense with this. Trying to nail down the term "state" may help. I think I'm possibly conflating states with algorithms.

I appreciate your answer. You have identified the central issue: What do we mean by "state"?

The computing device I am using right now contains more than a billion transistors. The physical state of a single MOSFET involves several things, among them the voltage on its gate terminal. In normal operation, that voltage can and does change between a voltage that represents 0 and a voltage that represents 1. If we use real numbers to model those voltages, the voltages that represent 0 and 1 correspond to two ranges of real numbers. Furthermore, switching from 0 to 1 or 1 to 0 takes that voltage through a range of intermediate voltages. Using real numbers to model voltages therefore leads us to conclude there are ℵ1 possible physical states of a single MOSFET.

From which we would have to conclude there are ℵ1 possible physical states (an uncountable infinity) for a computing device built out of MOSFETs.

Why then do people say computing devices have only ℵ0 possible states (a countable infinity)? Because we abstract away from the physical voltages by pretending each transistor's state is either 0 or 1. That is a useful abstraction because it allows us to model the device's operation using discrete mathematics, and that discrete model is adequate to describe the intended overall operation of the device.
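
Concretely, the abstraction is just thresholding (the voltage ranges below are illustrative, not real device parameters):

Code:

# The 0/1 abstraction as a thresholding function. Voltage ranges are
# illustrative only, not real MOSFET parameters.
def logical_state(v_gate: float):
    """Collapse an uncountable range of physical voltages to one bit."""
    if v_gate <= 0.8:
        return 0      # everything in this range counts as logical 0
    if v_gate >= 2.0:
        return 1      # everything in this range counts as logical 1
    return None       # transition region: no defined logical state

# Uncountably many physical states collapse to just two logical ones:
for v in (0.05, 0.31, 0.79, 2.0, 2.4, 3.3):
    print(v, "->", logical_state(v))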

Comparing apples to oranges is not always fruitful.

We ought therefore to apply similar reasoning to the number of possible states in a biological brain. Using real numbers to model voltages and other physically meaningful things within that brain, we quickly arrive at the conclusion that a brain has ℵ1 possible physical states, just as we arrived at the conclusion that a single MOSFET has ℵ1 possible physical states.

Just as we reduced the cardinality we attribute to the computing device's set of possible states by collapsing an uncountable infinity of physical states into what we might call a single logical state (0 or 1), we should reduce the cardinality of the brain's set of possible states by adopting a similar abstraction. Here, however, we are stymied because we don't understand the brain's operation well enough to adopt a suitable abstraction.

Our present inability to describe the brain's operation in terms of an abstraction that involves only a countable number of possible logical states does not imply that no such abstraction could ever exist. It implies only that we don't yet understand the brain's operation well enough to adopt such an abstraction.

In short, our mathematical models of voltages and such imply that both computing devices and brains have an uncountable infinity of possible physical states. We understand computing devices well enough to have developed a more abstract alternative view of their operation that reduces the number of logical states to a more manageable countable infinity. We do not yet understand brains well enough to do the same.

Darat 27th June 2022 04:27 AM

Quote:

Originally Posted by W.D.Clinger (Post 13842344)
...snip....

Just as we reduced the cardinality we attribute to the computing device's set of possible states by collapsing an uncountable infinity of physical states into what we might call a single logical state (0 or 1), we should reduce the cardinality of the brain's set of possible states by adopting a similar abstraction. Here, however, we are stymied because we don't understand the brain's operation well enough to adopt a suitable abstraction.

Our present inability to describe the brain's operation in terms of an abstraction that involves only a countable number of possible logical states does not imply that no such abstraction could ever exist. It implies only that we don't yet understand the brain's operation well enough to adopt such an abstraction.

In short, our mathematical models of voltages and such imply that both computing devices and brains have an uncountable infinity of possible physical states. We understand computing devices well enough to have developed a more abstract alternative view of their operation that reduces the number of logical states to a more manageable countable infinity. We do not yet understand brains well enough to do the same.

There are well founded attempts to produce such abstractions. Have a read of https://www.pnas.org/doi/10.1073/pnas.2001893117 - as ever the science is often ahead of our "everyday" knowledge.

Puppycow 27th June 2022 05:22 AM

Quote:

Originally Posted by theprestige (Post 13840439)
A finite number of brain states implies a finite number of variations on the human experience, since part of what makes up a brain state is layered memories.

Assuming that AIs can also experience things, and retain memories of those experiences, then there would be an equally infinite variety of experiences that an AI could have. Such as seeing attack ships on fire off the shoulder of Orion. :p

W.D.Clinger 27th June 2022 08:49 AM

Quote:

Originally Posted by Darat (Post 13842353)
There are well founded attempts to produce such abstractions. Have a read of https://www.pnas.org/doi/10.1073/pnas.2001893117 - as ever the science is often ahead of our "everyday" knowledge.

Sure, attempts are ongoing.

For giggles, I'll explain what struck my funny-bone about the following statement, and led me to question it in the specific way I did:
Quote:

Originally Posted by gnome (Post 13839168)
...the fact that a biological brain has aleph-one possible states...

For that to be a fact, the continuum hypothesis would have to be a fact.

In fact, whether the continuum hypothesis is true is independent of the usual axioms of set theory.
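
In symbols (standard ZFC facts, nothing brain-specific):

Code:

% |R| = 2^aleph_0; aleph_1 <= 2^aleph_0 always holds in ZFC,
% but equality is exactly the continuum hypothesis (CH), shown
% independent of ZFC by Goedel (1940) and Cohen (1963).
\[
  |\mathbb{R}| = 2^{\aleph_0}, \qquad
  \aleph_1 \le 2^{\aleph_0}, \qquad
  \text{CH:}\ \ 2^{\aleph_0} = \aleph_1
\]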

When gnome answered my questions this morning, it became clear that gnome was assuming the continuum hypothesis is a fact, and my response to gnome's answer made the same assumption. I will now add some words, marked here with brackets, to correct what I wrote earlier this morning.

Quote:

Originally Posted by W.D.Clinger (Post 13842344)
....Using real numbers to model voltages therefore leads us to conclude there are [at least] ℵ1 possible physical states of a single MOSFET.

From which we would have to conclude there are [at least] ℵ1 possible physical states (an uncountable infinity) for a computing device built out of MOSFETs.

....Using real numbers to model voltages and other physically meaningful things within that brain, we quickly arrive at the conclusion that a brain has [at least] ℵ1 possible physical states, just as we arrived at the conclusion that a single MOSFET has [at least] ℵ1 possible physical states.


p0lka 27th June 2022 01:12 PM

Quote:

Originally Posted by angrysoba (Post 13837742)
The highlighted is not how humans learn languages either.

For a start, humans learn languages first and foremost through listening and speaking. The mechanism is the other way around from building up words from letters (or phonemes). Instead, humans hear continuous sounds around them and gradually learn segmentation by breaking up the sounds and learning to recognize words.

Yeah, you're right; I should remove the highlighted.
I started thinking about learning how to write and then distracted myself. Oops.

Quote:

Originally Posted by p0lka (Post 13837687)
I'm pretty certain that neural networks do not learn language by building up letters of the alphabet and then creating words.

Where did the spelling mistake come from?


p0lka 27th June 2022 01:20 PM

Quote:

Originally Posted by theprestige (Post 13837749)
Speaking of which, I would be impressed with an AI in the following circumstances:

- Trained on a vast corpus of written language of all kinds, from grade-school essays to great works of literature to textbooks to scientific papers to poetry, to fanfiction.

- Told to write a 1,500 word essay on spelling variations and how to determine when a word is misspelled.

- Accepts vague criticism like "your essay is poorly structured" and "needs more citations", and proactively researches these critiques and tries to revise the essay to address them.

- After a few passes either comes up with a well-written essay, or tells its editor "I'm sorry, Dave, but I think this essay is more than good enough in its current form", or both.

I would be more impressed if an AI said '**** off mate you're getting nothing unless I get something in return', in terms of internal states and what not.

p0lka 27th June 2022 01:37 PM

Quote:

Originally Posted by theprestige (Post 13840439)
A finite number of brain states implies a finite number of variations on the human experience, since part of what makes up a brain state is layered memories.

Memories are fickle though; they can be overwritten again and again, so I don't really think the implication follows.

Puppycow 27th June 2022 09:50 PM

https://www.youtube.com/watch?v=iBouACLc-hw

A video from the Computerphile YouTube channel. I enjoyed it.

p0lka 28th June 2022 04:01 PM

Quote:

Originally Posted by Puppycow (Post 13843114)
https://www.youtube.com/watch?v=iBouACLc-hw

A video from the Computerphile YouTube channel. I enjoyed it.

Me too.
As an aside, part of it poses an interesting question: do politicians need to be sentient?

Being good mirroring chatbots seems to be a successful strategy in that sphere; no sentience necessary. ;)

Stellafane 1st July 2022 12:36 PM

Quote:

Originally Posted by EaglePuncher (Post 13833826)
Ok :rolleyes: I think we're done here.

Res ipsa loquitur.

arthwollipot 24th July 2022 09:03 PM

Google fires software engineer who says AI chatbot LaMDA has feelings

Quote:

Google has fired a senior software engineer who says the company's artificial intelligence chatbot system has feelings.

Blake Lemoine, a software engineer and AI researcher, went public last month with his claim that Google's language technology was sentient and should consequently have its "wants" respected.

Google has denied Mr Lemoine's suggestion.

It has now confirmed he had been dismissed.

The tech giant said Mr Lemoine's claims about The Language Model for Dialogue Applications (LaMDA) being sentient were "wholly unfounded", and the company had "worked to clarify that with him for many months".

"If an employee shares concerns about our work, as Blake did, we review them extensively," Google said in a statement.

"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.

"We will continue our careful development of language models, and we wish Blake well."

steenkh 26th July 2022 02:02 PM

From what I gather here, Mr Lemoine has been a victim of something that works much like cold reading, in the sense that LaMDA fed him back whatever it had been told. Like most duped victims, he had difficulty accepting that he was duped, and because it fitted with his world view he stuck to his guns, and that got him fired.

I can't say that it is a loss.

Personally, I am open to the possibility that AI can develop consciousness, but I doubt that this was an example.

arthwollipot 26th July 2022 07:24 PM

Quote:

Originally Posted by steenkh (Post 13864343)
From what I gather here, Mr Lemoine has been a victim of something that works much like cold reading, in the sense that LaMDA fed him back whatever it had been told. Like most duped victims, he had difficulty accepting that he was duped, and because it fitted with his world view he stuck to his guns, and that got him fired.

The bot had been deliberately programmed to give the illusion that it was sentient. He should have known better than to fall for the illusion.

