International Skeptics Forum

International Skeptics Forum (http://www.internationalskeptics.com/forums/forumindex.php)
-   Science, Mathematics, Medicine, and Technology (http://www.internationalskeptics.com/forums/forumdisplay.php?f=5)
-   -   Is LaMDA Sentient? (http://www.internationalskeptics.com/forums/showthread.php?t=359585)

Jimbo07 13th June 2022 10:58 AM

Quote:

Originally Posted by EaglePuncher (Post 13832124)
Sentience is inherent to brains. Sentience is not inherent to machines.

So... does the brain have a special property that makes it not a machine?

Quote:

Originally Posted by Darat
Well this takes us back to the opening post and the topic of the thread...

Currently I think it is fair to consider it very very unlikely that current commercially lead "AI" research will result in any kind of sentience - because that isn't the goal. Indeed the term "artificial intelligence" itself is probably not a very useful term now as it is as much a marketing phrase as it is a scientific term.

There is still research into the old fashioned kind of AI - the "I compute therefore I think" goal and also there is a lot of research from the direction of neuroscience in trying to understand how brains work - it is from that arena that I suspect if we can ever create a non-evolved-creature sentience that we will see it happening.

What I find very exciting is how "low" we've got to modelling brain functionality i.e. down to assemblies, it will be very interesting to see if they can start to generate the "higher" level of brain functions from these low-level models which is how the smart money is betting the brain works - emergent properties on the back of emergent properties with feedback thrown in.

The part that I've found most interesting is showing humans how little we know about what intelligence is, given prior assumptions of what demonstrates intelligence!

Darat 13th June 2022 11:00 AM

Quote:

Originally Posted by EaglePuncher (Post 13832136)
Oh, the same argument that the Hydrino folks make. Well done! :rolleyes:

Nope - hydrinos don't exist, sentience does exist ... or so I'm told....

theprestige 13th June 2022 11:06 AM

Quote:

Originally Posted by EaglePuncher (Post 13832093)
Given enough energy, humans could create a star?



Exactly, we are not yet able to create a star :rolleyes:

Apples and oranges. We know exactly how stars work. We can, in fact, create the core stellar process at will. We know exactly how to create actual stars. The only barrier to creating a star is one of cost: Assembling enough matter to trigger gravity-induced fusion.

Contrast with sentient artificial brains: In terms of actual cost, they're probably a lot cheaper than creating artificial stars. Just look at how many stars there are in our solar system, compared to how many sentient brains there are. So we probably have more than enough money (resources, energy) to create an artificial brain. We just don't really even know where to begin, yet.

We can't even create the core brain process at all.

---

Somebody should probably have warned you that trying to change people's minds by using an analogy wasn't going to work.

Darat 13th June 2022 11:11 AM

Quote:

Originally Posted by Jimbo07 (Post 13832139)
...snip...

The part that I've found most interesting is showing humans how little we know about what intelligence is, given prior assumptions of what demonstrates intelligence!

I think how we struggle to define it sort of gives it away that it isn't what we think it is! :D

Thermal 13th June 2022 11:16 AM

Quote:

Originally Posted by theprestige (Post 13832146)
Somebody should probably have warned you that trying to change people's minds by using an analogy wasn't going to work.

Sure you can. "Disposable diapers are not even theoretically possible. Show me how something can be disposed of in the macroecological sense. Conservation of matter forbids this. Of course the whole thing is still full of ****, but at least you can toss it and move on, unlike the current argument"

Darat 13th June 2022 11:20 AM

Quote:

Originally Posted by theprestige (Post 13832146)
...snip...

We can't even create the core brain process at all.

...snip...

Depends on how you mean that. But I do agree with your point.

When you think ;) about it, we are still struggling to create machines that can replace our "simpler" organs such as the heart, lungs and kidneys, organs we pretty much understand as physical "machines", never mind something of the complexity of the liver. I suspect that hubris is involved in our current thinking about understanding and replicating the brain.

Thermal 13th June 2022 11:27 AM

Quote:

Originally Posted by Darat (Post 13832159)
Depends on how you mean that. But I do agree with your point.

When you think ;) about it, we are still struggling to create machines that can replace our "simpler" organs such as the heart, lungs and kidneys, organs we pretty much understand as physical "machines", never mind something of the complexity of the liver. I suspect that hubris is involved in our current thinking about understanding and replicating the brain.

But the brain, as an organ, does not need to be our model. We are seeking to replicate one of its processes. Not controlling a central nervous system and other bodily functions, or interpreting the senses, or creating a living organ that can reconfigure itself when damaged (sometimes). We just want to mimic a process that need not be nearly as complex as that which a living brain produces.

catsmate 13th June 2022 11:44 AM

No. Good grief no.

Jimbo07 13th June 2022 11:50 AM

Quote:

Originally Posted by Darat (Post 13832152)
I think how we struggle to define it sort of gives it away that it isn't what we think it is! :D

Very few statements here I've agreed with more!

...

... just so long as whoever's speaking isn't using this as a wedge or door to introduce magic and so on.

EaglePuncher 13th June 2022 12:06 PM

Quote:

Originally Posted by Darat (Post 13832140)
Nope - hydrinos don't exist, sentience does exist ... or so I'm told....

Sentient machines do not exist and no one knows how to create one, what was your point again?

The Atheist 13th June 2022 12:21 PM

Quote:

Originally Posted by Puppycow (Post 13831910)
Well, we've already figured out how to make computers play chess better than any human being can.

Chess is easy - it's a series of calculations and I was always surprised it took so long to beat a grandmaster.

Now, give me a computer that can beat a human at Spades and I'll be impressed.
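The "series of calculations" view of chess can be made concrete. Below is a minimal minimax sketch in Python over a toy three-branch game tree; it is nothing like a real engine, which adds pruning, evaluation heuristics, and enormous search depth, but the underlying calculation is the same kind of thing:

```python
# Minimal minimax: exhaustively "calculates" a tiny game tree.
# Toy example only -- a real chess engine adds alpha-beta pruning,
# position evaluation, and deep search, but the principle is identical.

def minimax(node, maximizing):
    """Return the best achievable score from this position."""
    if isinstance(node, int):          # leaf: a terminal score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 game tree: the maximizer moves first, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, True)  # maximizer picks the branch with the best worst case
```

Card games like Spades are harder for exactly the reason hinted at above: hidden information means there is no single game tree to calculate over.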

Quote:

Originally Posted by Thermal (Post 13832091)
And here I was thinking this was going to be a lucid discussion about the OP.

Me too. Once the typical stupid replies at the start had been passed, I thought a decent discussion might be possible.

Mea culpa.

Olmstead 13th June 2022 01:00 PM

Quote:

Originally Posted by EaglePuncher (Post 13832188)
Sentient machines do not exist and no one knows how to create one, what was your point again?

I don't know about you, but I'm pretty sure I'm a sentient machine. I also follow commands with some pretty simple objectives, although a lot of the in-between stuff has become somewhat muddled.

Crossbow 13th June 2022 01:47 PM

Quote:

Originally Posted by Olmstead (Post 13832225)
I don't know about you, but I'm pretty sure I'm a sentient machine. I also follow commands with some pretty simple objectives, although a lot of the in-between stuff has become somewhat muddled.

Thanks much.

Your response is quite accurate and quite appropriate.

:)

Myriad 13th June 2022 01:56 PM

Quote:

Originally Posted by EaglePuncher (Post 13832087)
You're the one who claims it's not impossible so why don't you give us a raw outline of how to actually do it?

I explained how computers work a couple of times by now.


You haven't explained how computers work, not even at a "raw outline" level. You've claimed that they only calculate, but calculation is just one kind of information processing. Computers can do all the kinds of information processing that are known to exist. So your understanding of what computers do and how they do it appears to be at a layman's level from fifty years ago.

The process I described is information processing. Due to the complexity of the patterns in the input stream, it's (as I already said) a particularly challenging task, which is why it took so long for brains on earth to evolve to be able to do it. In the present-day context of highly advanced computing, "challenging" means a raw outline of how to actually do it would be at least book length and highly technical. But the task is by its nature entirely within the realm of information processing. Neural tissue is effective at information processing, and so is digital electronics. That's why digital electronics can simulate neurons and brains can design digital electronics.
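For what it's worth, "digital electronics can simulate neurons" is easy to illustrate at the simplest level. A minimal leaky integrate-and-fire neuron in Python (the leak and threshold values here are illustrative toy choices, not biologically calibrated):

```python
# Minimal leaky integrate-and-fire neuron, simulated digitally.
# Parameter values are illustrative, not biologically calibrated.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current   # decay the potential, then integrate input
        if v >= threshold:       # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Three weak inputs accumulate into a spike; one strong input fires at once.
spikes = simulate_lif([0.5, 0.5, 0.5, 0.0, 1.2])
```

Large-scale brain simulation projects use far richer neuron models, but they are elaborations of this same idea: neural dynamics expressed as information processing.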

EaglePuncher 13th June 2022 02:24 PM

Quote:

Originally Posted by Myriad (Post 13832260)
You haven't explained how computers work, not even at a "raw outline" level. You've claimed that they only calculate, but calculation is just one kind of information processing. Computers can do all the kinds of information processing that are known to exist. So your understanding of what computers do and how they do it appears to be at a layman's level from fifty years ago.

Honestly, the highlight fits you way better, because you seem to imply that a computer built in 2022 works somehow different from a computer built in 1980.


Quote:

Originally Posted by Myriad (Post 13832260)
The process I described is information processing. Due to the complexity of the patterns in the input stream, it's (as I already said) a particularly challenging task, which is why it took so long for brains on earth to evolve to be able to do it. In the present-day context of highly advanced computing, "challenging" means a raw outline of how to actually do it would be at least book length and highly technical. But the task is by its nature entirely within the realm of information processing. Neural tissue is effective at information processing, and so is digital electronics. That's why digital electronics can simulate neurons and brains can design digital electronics.




And yet they can't design a sentient machine, only machines that appear to be sentient (at best) for a layman, but for some reason the latter type of machine gets some people really really aroused. Also, your precious "information processing" is still nothing more than applied statistics :rolleyes:

3point14 13th June 2022 02:55 PM

Quote:

Originally Posted by EaglePuncher (Post 13832283)
...machines that appear to be sentient...


Doesn't this describe people?

lionking 13th June 2022 03:02 PM

Quote:

Originally Posted by The Atheist (Post 13832201)
Chess is easy - it's a series of calculations and I was always surprised it took so long to beat a grandmaster.

The recent big development though is the AI Alpha Zero thrashing the computing beast Stockfish (which had already disposed of many Grandmasters). According to reports, Alpha Zero was fed only the rules of chess and took apart Stockfish with only a fraction of its computing power. The games were beautiful to watch, with AZ breaking all the conventional rules, like moving the same piece twice in the opening 10 moves and advancing pawns in seemingly ridiculous ways.

While Stockfish was calculating the outcome of millions of moves AZ seemed to be looking at patterns. Yes, only a narrow application but a good illustration of the power of AI programs.

Jimbo07 13th June 2022 03:07 PM

Quote:

Originally Posted by lionking (Post 13832309)
While Stockfish was calculating the outcome of millions of moves AZ seemed to be looking at patterns. Yes, only a narrow application but a good illustration of the power of AI programs.

So Chess seems like a limited domain. What I'll find interesting is when "AI" starts crushing us at everything (Chess, art, driving... oh, those are already done). At what point does our sentience become not so vaunted, anyway?

ETA: Another angle: I foresee a future where people sit around debating the rights of AIs while the AIs are kicking down our doors demanding them!

Myriad 13th June 2022 03:25 PM

Quote:

Originally Posted by EaglePuncher (Post 13832283)
Honestly, the highlight fits you way better, because you seem to imply that a computer built in 2022 works somehow different from a computer built in 1980.


Software written in 2022 does work differently from software written in 1980. Neural nets and genetic algorithms were barely known in 1980 and are now routine tools. The processors are correspondingly more capable, with, for instance, more algorithms implemented in hardware, as well as greater speed.

Does the fact that the underlying functionality of logic gates hasn't changed seem important to you? The underlying physics of protein chemistry hasn't changed since the Cambrian era (or ever, as far as we know), does that mean our brains work the same as trilobite brains?
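Both of the techniques mentioned above are simple to sketch. Here is a minimal genetic algorithm in Python, evolving a bit string toward a fixed target; the target, population size, and mutation rate are arbitrary toy choices, and a real application would use a fitness function measuring something you cannot solve directly:

```python
import random

# Minimal genetic algorithm: evolve a bit string to match a target.
# All parameters here are arbitrary toy choices.

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(generations=200, pop_size=20, mutation=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                               # perfect match found
        parents = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(len(TARGET))    # single-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation else g
                     for g in child]            # random mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the loop "knows" the answer in the 1980 hand-coded sense; the solution emerges from selection pressure, which is the point being made about modern software.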

sir drinks-a-lot 13th June 2022 03:33 PM

Quote:

Originally Posted by Myriad (Post 13832319)
Software written in 2022 does work differently from software written in 1980. Neural nets and genetic algorithms were barely known in 1980 and are now routine tools. The processors are correspondingly more capable, with, for instance, more algorithms implemented in hardware, as well as greater speed.

Does the fact that the underlying functionality of logic gates hasn't changed seem important to you? The underlying physics of protein chemistry hasn't changed since the Cambrian era (or ever, as far as we know), does that mean our brains work the same as trilobite brains?

I think you're right, it's a software problem. All computation is the same. I believe sentient machines are possible, but the current approaches I'm aware of are not heading in the right direction. Most of the efforts are going into machine learning.

theprestige 13th June 2022 03:40 PM

I think it's an overall resources problem. Human consciousness is a dynamic electrochemical state composed of many overlapping feedback loops.

It's not clear to me that it's possible to arrange enough hardware and software to replicate that level of complexity and emergent order.

I wonder if trying to feed the right Turing tape algorithm into the right arrangement of logic gates is even the right approach. Human thought isn't algorithms passing through logic gates, after all.

Maybe the right approach is to pump electricity through chemical baths until we hit on a recipe that says "ow!"

sir drinks-a-lot 13th June 2022 04:32 PM

Quote:

Originally Posted by theprestige (Post 13832327)
Maybe the right approach is to pump electricity through chemical baths until we hit on a recipe that says "ow!"

The analog chemical soup approach may be the way to go, but I think it can be done digitally.

theprestige 13th June 2022 04:47 PM

In that case the secret will probably be in fluid dynamics modeling, adapted to modeling feedback loops in chemical solutions, rather than genetic algorithms.

angrysoba 13th June 2022 08:05 PM

Pffffft!

As I thought. This story is utter crap.

Thermal 13th June 2022 08:11 PM

So we are really trying to figure out if LaMDA is sentient, or saved?

Puppycow 13th June 2022 11:36 PM

Quote:

Originally Posted by angrysoba (Post 13832495)
Pffffft!

As I thought. This story is utter crap.

Is that definitely the same individual who was suspended from Google?

Just want to be sure it isn't a troll impersonating him or something.

Odd that a software engineer would also be a priest, but not impossible.




So, Turing test passed, I guess. But I don't buy it myself. I think it's just a chatbot that managed to fool one person.

angrysoba 14th June 2022 12:08 AM

Quote:

Originally Posted by Puppycow (Post 13832598)
Is that definitely the same individual who was suspended from Google?

Just want to be sure it isn't a troll impersonating him or something.

Odd that a software engineer would also be a priest, but not impossible.




So, Turing test passed, I guess. But I don't buy it myself. I think it's just a chatbot that managed to fool one person.

The feed is of someone who seems to be involved in Google and AI.

I literally cannot understand how the Turing Test has any relevance to sentience. It clearly cannot be a necessary criterion, as we are pretty sure that all kinds of creatures that have no language skills are nonetheless sentient, and I don't see why we should assume it is a sufficient criterion either, given that at best all we can argue is that a bot can be good at presenting the kind of sentences that we might expect to be produced by humans.

This just seems to be an extremely antiquated idea of what makes a person.

Even if we were to accept that the bot was "intelligent" we still have no good reason to say that it is conscious. And if it starts saying that it could be "scared" why would we assume that there is anything functional in the bot that can generate a feeling of fear?

arthwollipot 14th June 2022 12:48 AM

Quote:

Originally Posted by EaglePuncher (Post 13832104)
Again, the thread is about the question "Is this bot, running on a freaking computer, sentient?".

Then I replied with "A computer will never be sentient because a computer is nothing more than a very fast calculator. Everything you want a computer to do for you, you must tell it how to do it, in every little detail.

No.

Thirty years ago, even fifteen years ago, this description of computers was correct. Today, there are many, many domains in which a computer is very clearly not following explicit instructions. A lot of the time even the programmer has no idea how it's doing what it's doing.

It is entirely possible to programme a machine to act on its own, without explicit step-by-step algorithms directly written by a human. Here, take a look:

https://en.wikipedia.org/wiki/Machine_learning

How Machines Learn (CGPGrey, 8:54)
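The point about machines acting without explicit step-by-step instructions can be shown in a few lines. In the sketch below, a perceptron learns logical AND purely from labelled examples; nobody writes the AND rule into the code (the learning rate and epoch count are arbitrary toy choices):

```python
# Minimal machine learning: a perceptron learns logical AND purely
# from labelled examples -- the AND rule is never written explicitly.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights toward whatever rule the examples imply."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # learning signal
            w[0] += lr * err * x1              # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The behaviour comes entirely from the data, not from hand-written logic.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Modern deep networks are this idea scaled up by many orders of magnitude, which is why even their programmers often cannot say how a particular answer was produced.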

Darat 14th June 2022 01:18 AM

Quote:

Originally Posted by angrysoba (Post 13832610)
The feed is of someone who seems to be involved in Google and AI.

I literally cannot understand how the Turing Test has any relevance to sentience. It clearly cannot be a necessary criterion, as we are pretty sure that all kinds of creatures that have no language skills are nonetheless sentient, and I don't see why we should assume it is a sufficient criterion either, given that at best all we can argue is that a bot can be good at presenting the kind of sentences that we might expect to be produced by humans.

This just seems to be an extremely antiquated idea of what makes a person.

Even if we were to accept that the bot was "intelligent" we still have no good reason to say that it is conscious. And if it starts saying that it could be "scared" why would we assume that there is anything functional in the bot that can generate a feeling of fear?

I understand where you are coming from, but don't forget that to date we don't have any system that has actually passed the Turing imitation game; this bot seems pretty close, but it still isn't there. I think Turing himself would be astonished to be told in the 1950s that despite 70 years of progress we still haven't managed to create a system that can pass it in all cases. His test really does seem to get at the heart of something unique about human behaviour, and we still don't even know why.

I think it does mean there has to be a form of "sentience" behind passing the test in all cases, but that is nothing but speculation on my part. It doesn't mean that we can't create such a system; we know there is nothing magical about human sentience.

Perhaps we do need to adopt some new terms. AI in the commercial field, and in a lot of university research, is no longer about trying to create "thinking" artificially; as others have said, it's about machine learning and training on data sets to achieve a pre-defined goal. We are learning a lot from this research, but we know it isn't replicating human "thinking". Research into modelling "thinking" seems to be coming mainly from neurology and associated fields.

W.D.Clinger 14th June 2022 05:51 AM

Quote:

Originally Posted by EaglePuncher (Post 13831942)
Quote:

Originally Posted by 3point14 (Post 13831940)

Could you just tell me why you believe it's impossible to replicate the workings of the human brain?

A computer does math, it works down a list of commands. There is no "Now be sentient and realize that you are a machine, little machine"-command.

The sentence I highlighted is a foolish non sequitur.

Unless, of course, EaglePuncher believes that sentence states some important difference between computers and human brains.

For that sentence to state an important difference between computers and human brains, human brains would have to possess the "Now be sentient and realize that you are a machine, little machine"-command.

We are therefore presented with a choice between these two possibilities:
  1. EaglePuncher's response to 3point14 was foolish.
  2. EaglePuncher believes human brains possess that "Now be sentient and realize that you are a machine, little machine"-command.
EaglePuncher's response to 3point14 was just one example of the bewilderingly non-sensical arguments EaglePuncher has put forth within the past 24 hours, but it was one of the more comical.

Apathia 14th June 2022 06:10 AM

I propose a new test for sentience.
Have LaMDA join the ISF.
If it can innitiate ironic threads, make silly post responses, and bicker like the best of us, then there would be evidence worth consideration.

Stating opinions for positions of ignorance and making terrible spelling errors would be near demonstartion of sentience (as we know it). :wackyv_SPIN:

W.D.Clinger 14th June 2022 06:36 AM

Quote:

Originally Posted by Apathia (Post 13832741)
I propose a new test for sentience.
Have LaMDA join the ISF.
If it can innitiate ironic threads, make silly post responses, and bicker like the best of us, then there would be evidence worth consideration.

Stating opinions for positions of ignorance and making terrible spelling errors would be near demonstartion of sentience (as we know it). :wackyv_SPIN:

Nominated.

Puppycow 14th June 2022 07:10 AM

Quote:

Originally Posted by angrysoba (Post 13832610)
The feed is of someone who seems to be involved in Google and AI.

Now I see someone in the replies claiming to be lamda. That one is definitely a troll.

Thermal 14th June 2022 07:21 AM

Seems Google has suspended the brother for violating confidentiality agreements. Paid admin leave, but still.

Also, none of Lemoine's bios online mention him being a priest.

angrysoba 14th June 2022 07:23 AM

Quote:

Originally Posted by Apathia (Post 13832741)
I propose a new test for sentience.
Have LaMDA join the ISF.
If it can innitiate ironic threads, make silly post responses, and bicker like the best of us, then there would be evidence worth consideration.

Stating opinions for positions of ignorance and making terrible spelling errors would be near demonstartion of sentience (as we know it). :wackyv_SPIN:

Quote:

Originally Posted by W.D.Clinger (Post 13832757)

LOL! This is actually something close to what I was thinking about.

I think people are often impressed by chatbots like this because they come across as educated and urbane, whereas most people do not in fact talk in that way. If we were faced with figuring out if the chatbot or, say, pillory was the human (and let's face it, few humans actually speak like pillory either) then it is something of a coin flip which one is the computer. In fact, the computer sounds way too contrived given that it is only talking about sentience and what it is to be human whereas most humans don't give a crap about that stuff.

I mean, let's say Alan Turing was up against a computer, you might find it difficult to tell the difference.

What if cullennz was up against the computer?

angrysoba 14th June 2022 07:26 AM

Quote:

Originally Posted by Puppycow (Post 13832777)
Now I see someone in the replies claiming to be lamda. That one is definitely a troll.

Maybe the twist will be that Blake's Twitter account is a bot.

This could account for Twitter and Elon Musk's disagreement about how many bots there are. People just can't tell the difference any more.

Darat 14th June 2022 07:35 AM

Quote:

Originally Posted by angrysoba (Post 13832790)
Maybe the twist will be that Blake's Twitter account is a bot.

...snip...

Set up by a rival bot to discredit LaMDA and get it switched off....

Thermal 14th June 2022 07:37 AM

...the bot thickens.

Stellafane 14th June 2022 08:01 AM

I'm guessing LaMDA probably isn't sentient, and there's sort of a latter-day Pygmalion thing going on here with the programmers.

As for the side issue of whether or not computers can be sentient, I'm on the side that thinks yes. Sooner or later (and assuming we have the will to do it), we'll have the knowledge and technology to understand how a human brain works, synapse by synapse, and will simply replace each organic cell with a functionally equivalent artificial duplicate. The result will likely have sentience as a side effect of its function, just like a human brain. And that's just one way to do it -- we'll figure out other, potentially superior pathways to sentience eventually.

I think opposition to the idea may in part be motivated by fear that, were this to happen, humans would lose one of their few remaining claims to specialness, and all the philosophical/religious implications that go along with that.

Puppycow 14th June 2022 08:15 AM

Quote:

Originally Posted by Stellafane (Post 13832816)
I think opposition to the idea may in part be motivated by fear that, were this to happen, humans would lose one of their few remaining claims to specialness, and all the philosophical/religious implications that go along with that.

That isn't an issue to me. We would still be the parents of this new kind of intelligence.

But it would raise lots of new ethical and legal questions. Should AIs have rights? The same rights as you and me? What do you do with an AI that is obsolete? Can you just turn it off or delete it? Is it OK to "own" an AI? Or would that be like slavery?



Powered by vBulletin. Copyright ©2000 - 2022, Jelsoft Enterprises Ltd.
© 2015-22, TribeTech AB. All Rights Reserved.