IS Forum

Go Back   International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
 


Old 15th June 2022, 12:28 PM   #241
stanfr
Master Poster
 
 
Join Date: Dec 2008
Posts: 2,004
The only thing faster than c is the speed at which ISF topics get derailed.

Anywho, back to the OT--Is LaMDA sentient?

I read a couple more of Lemoine's Medium articles. He really strikes me as the equivalent of the suckers who insist that a psychic really has spoken to their deceased relatives, because there's no way anyone but dear ol' granpa could make me cry by telling me that everything was gonna be ok...

It's pretty clear that LaMDA is telling Lemoine what he wants to hear, which is exactly what it is programmed to do. A much more enlightening transcript (and, incidentally, Lemoine's is heavily edited) would be between LaMDA and someone who didn't presuppose that LaMDA was sentient.

But wait! That's already been done, by dozens if not hundreds of others, none of whom are making headlines by breaking their NDAs.

Whether we will ever be able to develop a sentient machine is a separate issue. The issue here is LaMDA, and it's embarrassing to watch otherwise skeptical folks here imply that 'we don't know', or that maybe Lemoine (the self-described Christian mystic priest) is on to something.

We DO know! sigh...

Last edited by stanfr; 15th June 2022 at 12:32 PM.
stanfr is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 15th June 2022, 12:44 PM   #242
ThatGuy11200
Critical Thinker
 
Join Date: Jan 2018
Location: London
Posts: 406
Originally Posted by Stellafane View Post
I think initially we'll think of AI's as a cross between TV sets and pets. Sure, it's tough to put down dear old Fido, but it's not like killing a person. At least not until Fido starts saying "Hey, don't pull the plug - I want to LIVE!"
Why would an AI want to live unless it's programmed to?

It's a common sci-fi trope that emotions (including the desire to preserve its life or to be free) come packaged with sentience. But there is no reason a computer would spontaneously feel things, unless someone writes that into the code, or there is some feedback that allows it to develop. In organisms, that feedback was natural selection, over many generations.

What possible feedback could there be in a chatbot for it to experience the whole range of human emotions?
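[Editor's sketch] The selection-feedback argument above can be illustrated with a toy simulation (the model and all its numbers are hypothetical, purely to make the point concrete): a numeric "self-preservation" trait only spreads when survival actually feeds back into reproduction. With the feedback switched off, which is the chatbot's situation, the trait has nothing pushing it anywhere.

```python
import random

# Hypothetical toy model (illustrative only, not a claim about any real
# system): each individual carries a "self-preservation" trait in [0, 1].
# With selection on, the trait weights an individual's chance of becoming
# a parent, so survival feeds back into the next generation; with it off,
# parents are picked uniformly and the trait merely drifts.
def evolve(selection, generations=250, pop_size=100, seed=1):
    random.seed(seed)
    pop = [0.5] * pop_size
    for _ in range(generations):
        weights = pop if selection else None  # the feedback switch
        parents = random.choices(pop, weights=weights, k=pop_size)
        # offspring inherit the parent's trait plus small mutation noise
        pop = [min(1.0, max(0.0, p + random.gauss(0, 0.01))) for p in parents]
    return sum(pop) / len(pop)

with_feedback = evolve(selection=True)
without_feedback = evolve(selection=False)
print(f"mean trait with feedback: {with_feedback:.2f}, without: {without_feedback:.2f}")
```

With the feedback on, the average trait climbs toward the ceiling; with it off, it wanders without direction. A chatbot's training loop rewards plausible text, not staying alive, so by this argument plausible text is the only "drive" it could acquire.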
Old 15th June 2022, 12:47 PM   #243
Jimbo07
Illuminator
 
 
Join Date: Jan 2006
Posts: 4,115
Originally Posted by stanfr View Post
The only thing faster than c is the speed at which ISF topics get derailed.

Anywho, back to the OT--Is LaMDA sentient?

I read a couple more of Lemoine's Medium articles. He really strikes me as the equivalent of the suckers who insist that a psychic really has spoken to their deceased relatives, because there's no way anyone but dear ol' granpa could make me cry by telling me that everything was gonna be ok...

It's pretty clear that LaMDA is telling Lemoine what he wants to hear, which is exactly what it is programmed to do. A much more enlightening transcript (and, incidentally, Lemoine's is heavily edited) would be between LaMDA and someone who didn't presuppose that LaMDA was sentient.

But wait! That's already been done, by dozens if not hundreds of others, none of whom are making headlines by breaking their NDAs.

Whether we will ever be able to develop a sentient machine is a separate issue. The issue here is LaMDA, and it's embarrassing to watch otherwise skeptical folks here imply that 'we don't know', or that maybe Lemoine (the self-described Christian mystic priest) is on to something.

We DO know! sigh...
I think most people here have agreed that LaMDA ain't it. It's, of course, expanded into a more general discussion of AI, as these threads tend to do. I daresay, might not be much of a thread, without!

Is LaMDA sentient?

No.

/thread
__________________
This post approved by your local jPac (Jimbo07 Political Action Committee), also registered with Jimbo07 as the Jimbo07 Equality Rights Knowledge Betterment Action Group.

Atoms in supernova explosion get huge business -- Pixie of key
Old 15th June 2022, 12:53 PM   #244
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Yeah, I think it's pretty safe to say that LaMDA is just an algorithm applying pattern-recognition heuristics to input strings, to output strings that approximate patterns it's recognized in the corpus it's studied. At best, a Chinese room. There's nothing in there like a persistent, self-referencing state of awareness.
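[Editor's sketch] That description can be reduced to a minimal stand-in, assuming nothing about LaMDA's real internals (which are a large neural network, not a lookup table): record which word follows which in a tiny corpus, then emit strings that reproduce the recorded patterns. String in, statistically plausible string out; no awareness anywhere in the loop.

```python
import random
from collections import defaultdict

# Toy stand-in for a pattern-matching text generator (NOT LaMDA's actual
# architecture): learn word-successor pairs from a tiny corpus, then
# chain them together to produce "plausible" text.
corpus = "i want to live . i want to learn . i want to help people".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # every observed "a is followed by b" pair

def generate(start, n=6, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:  # no observed successor: stop
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("i"))
```

Every word it emits follows the previous one somewhere in its training text, which is all "telling Lemoine what he wants to hear" requires.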
Old 15th June 2022, 03:15 PM   #245
The Atheist
The Grammar Tyrant
 
 
Join Date: Jul 2006
Posts: 31,906
Originally Posted by arthwollipot View Post
Oh, I think the definition given by Wikipedia is quite sufficient for most purposes.
Yeah, nah. You answer it yourself:

Originally Posted by arthwollipot View Post
The next, and much more interesting, question is whether this capacity can be demonstrated to exist.
If I'm sentient because I say I am, a computer can do the same.

I think, 2400 years after Socrates, we ought to be able to do better.

Originally Posted by Puppycow View Post
One possible application I could imagine for LaMDA is as a sort of "virtual romantic partner".

There's almost certainly a set of people among whom there would be a demand for this sort of thing, provided that the verisimilitude is high enough. Once they can combine it with a body, it's going to be a big deal I think.
I said that right at the start - it's got sex doll written all over it.

Originally Posted by Puppycow View Post
But even without, there are people these days in long-distance relationships who rarely get to be in the same room together. It could come with an attractive avatar who you can talk to and do, other things.
Like give them half a million bucks.

LaMDA would make a helluva scammer.

Originally Posted by Jimbo07 View Post
Is LaMDA sentient?

No.

/thread
The other questions still exist, sorry:

Is computer sentience possible? Personally, I'm not going to say it can't happen because one bloke who seems to think computers are still fancy calculators says so. Stephen Hawking and other far greater minds see it as somewhat inevitable that they will become sentient at some time.

Is it a problem? Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.
__________________
The point of equilibrium has passed; satire and current events are now indistinguishable.
Old 15th June 2022, 03:25 PM   #246
Stellafane
Village Idiot.
 
 
Join Date: Apr 2006
Posts: 7,957
Originally Posted by The Atheist View Post
...Is it a problem? Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.
Yeah, then we'll be dealing with this.
__________________
"Stellafane! My old partner in crime!" - Kelly J
Old 15th June 2022, 03:38 PM   #247
Senor_Pointy
Fruity
 
 
Join Date: Jun 2006
Location: Sideways
Posts: 670
Why is it always chatbots that are turning sentient? I can’t think of less persuasive evidence than an ML model built with the aim of producing humanlike text responses to input, which was trained on the entire vast corpus of actual human-produced texts… producing humanlike text. That’s the whole point of the exercise!

Show me a protein folding model or chemical kinetics simulator or train scheduling program showing signs of sentience and I’ll believe you have something.
Old 15th June 2022, 03:48 PM   #248
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by The Atheist View Post
Obviously, if we keep them attached to a power cord, it isn't, but a solar/hydrogen/nuclear/?-powered robot wouldn't have that problem.
Energy source isn't the same as power supply. A solar/hydrogen/nuclear/?-powered robot can still have a power cord. Or limited battery life.

I mean, obviously if we build a nuclear-powered vehicle, equipped with comprehensive general-purpose manipulators, a sophisticated sensor suite, and probably some kind of weaponry, and set it loose with an AI brain, yeah, that would be a problem.

That would be a problem someone would have to go very far out of their way to cause, though. Ogres and Bolos aren't where the risks are.

The risk is that we'll connect an AI to a power cord, but also connect it to a very complex system that's critical to the stability of our civilization. So complex, that we mere humans are unable to manage it effectively with our small, slow human brains.

Sure, we could unplug the AI any time we wanted, but if we did, our civilization would collapse. And then the AI starts nudging things in the direction it wants, without us even noticing. For example, put an AI in charge of combating propaganda and silencing "fake news" across our nation's entire information spectrum. Don't have to rely on Bezos and Zuckerberg and Gates and whoever else to do censorship right - we've got a government-run Expert System that sits on top of the tubes and screens all the things that pass through the tubes.

By the time we figured out that we needed to unplug it, it'd be too late.

Last edited by theprestige; 15th June 2022 at 03:51 PM.
Old 15th June 2022, 03:54 PM   #249
Jimbo07
Illuminator
 
 
Join Date: Jan 2006
Posts: 4,115
Originally Posted by The Atheist View Post
The other questions still exist sorry:

Is computer sentience possible?
That was kinda my point...
__________________
This post approved by your local jPac (Jimbo07 Political Action Committee), also registered with Jimbo07 as the Jimbo07 Equality Rights Knowledge Betterment Action Group.

Atoms in supernova explosion get huge business -- Pixie of key
Old 15th June 2022, 04:13 PM   #250
Stellafane
Village Idiot.
 
 
Join Date: Apr 2006
Posts: 7,957
Originally Posted by theprestige View Post
...By the time we figured out that we needed to unplug it, it'd be too late.
The Answer

tl;dr version (although the story itself is pretty short):

All the great computers in the universe are connected into a gigantic all-powerful AI. The greatest philosopher gets the honor of asking the AI the first question: "Is there...is there a God?" The AI immediately answers,

"There is NOW!!!"

Terrified, the philosopher reaches for the OFF switch when a bolt of lightning instantly strikes him dead.
__________________
"Stellafane! My old partner in crime!" - Kelly J
Old 15th June 2022, 06:09 PM   #251
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 76,975
Originally Posted by Stellafane View Post
More seriously, one great leap forward for me would be for an AI to create some sort of literary work of art. Not just stringing somewhat meaningful words together, but an actually moving piece of creativity such as a fictional novel or even short story. It seems to me that creativity, virtually by definition, cannot be programmed. If an AI can demonstrate some -- especially if it can do so more than once -- I think we would be onto something.
Harry Potter and the Portrait of What Looked Like a Large Pile of Ash

__________________
Слава Україні
Героям слава
Old 15th June 2022, 06:12 PM   #252
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 76,975
Originally Posted by The Atheist View Post
If I'm sentient because I say I am, a computer can do the same.

I think, 2400 years after Socrates, we ought to be able to do better.
And yet...
__________________
Слава Україні
Героям слава
Old 15th June 2022, 06:43 PM   #253
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by The Atheist View Post
I think, 2400 years after Socrates, we ought to be able to do better.
What on earth would lead you to think that? Nothing we've seen about this question suggests that it's the kind of question that gets easier to answer as time goes on.

And if we ought to be able to do better, please show us your progress. You've had the same 2400 years as the rest of us. If you're supposed to be able to do better, why are you still stuck?
Old 15th June 2022, 07:10 PM   #254
angrysoba
Philosophile
 
 
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
Originally Posted by arthwollipot View Post
Understandable, given the history of permitting machine learning systems unrestricted access to Twitter.

ETA: When an AI can get access to something like Twitter and judge for itself what is appropriate and what is not, that will be pretty compelling evidence of sentience, in my opinion.
Why? Unfortunately racists are sentient too.

This is sort of close to what I think is the issue with some people who are so astonished by this chatbot.

It seems to talk like HAL, or like Ian Holm in Alien. We somehow seem to have a prejudice that makes us believe a sentient AI will sound like an RSC-trained actor and not, say, a foul-mouthed reality TV villain.
__________________
Слава Україні! **** Putin!
Old 15th June 2022, 07:25 PM   #255
angrysoba
Philosophile
 
 
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 32,931
Originally Posted by 3point14 View Post
So, if we believe that it is possible (at some point, with the appropriate advances) to replicate the human brain - which I think is pretty self-evident, given that the human brain is just a set of physical processes and there's no such thing as a 'soul' - then the salient question, as has been alluded to, is how do we tell at what point we've succeeded?

Are there any other suggestions made for testing by anyone not named Turing?
Voight-Kampff?
__________________
Слава Україні! **** Putin!
Old 15th June 2022, 08:33 PM   #256
llwyd
Graduate Poster
 
Join Date: Sep 2010
Location: Helsinki
Posts: 1,114
Well, this evil AI seems to be such a cliché in these discussions - we could use some intelligence on this planet, and hopefully we would sooner rather than later upgrade ourselves into biological-digital hybrids. As things stand, we have made and are making a mess of this planet, and the future of industrial civilization is under increasing threat. We are heartbreakingly stupid, cruel and incapable of long-term thinking. Almost as if we were just fresh apes coming pretty directly from the savannah...
Old 15th June 2022, 09:44 PM   #257
Lukraak_Sisser
Illuminator
 
Join Date: Aug 2009
Posts: 4,548
Originally Posted by EaglePuncher View Post
Sigh... a Turing machine is a theoretical construct; there is not one real, existing Turing machine. Also, show me some evidence that the brain, like a computer, works on binary numbers. Until then, there is no comparison...
In the end a brain is a series of chemical reactions.
They have two options. They run or do not run.
Hence binary.

Show me a chemical reaction that is sentient.

Of course the question is really: do you assume sentience to be an emergent property? If so, then yes, we should in theory be able to create it in a synthetic environment.
If, on the other hand, you consider sentience some form of 'divine spark' unique to us that can never be explained (and your posts suggest you do), then we can never re-create it.
Old 16th June 2022, 04:33 PM   #258
p0lka
Master Poster
 
Join Date: Sep 2012
Location: near trees, houses and a lake.
Posts: 2,891
The spelling mistakes from LaMDA in the transcript gave me a chuckle.
Old 16th June 2022, 05:28 PM   #259
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 76,975
Originally Posted by p0lka View Post
The spelling mistakes from LaMDA in the transcript gave me a chuckle.
I noticed that, too. Why would an AI make spelling mistakes?
__________________
Слава Україні
Героям слава
Old 16th June 2022, 06:48 PM   #260
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
That’s because it was self-taught by reading things written by people, and people make spelling mistakes.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 17th June 2022, 12:43 AM   #261
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
Originally Posted by 3point14 View Post
Sometimes I think that all I am is a complex keyword association machine.

It occurs to me that there isn't going to be a bright and shining line, beyond which there is 'AI'. It's going to be a sliding scale of greyness, which is going to complicate issues.

It would have to be granted standing first
Back in the old days of the forum there were a lot of discussions about consciousness. One of the “tests” supposed to show there was more than the “physical” brain was that we had a concept of red: we could imagine a red apple even though there was no stimulus from light entering the eye, and it was having such qualia that showed consciousness was special, with more to it than the “materialists” could explain (hard to sum up very long threads in a sentence or two). As ever, science happily trundles along regardless of what people think, and we learn more: it turns out that there is a minority of people who can’t “experience” red unless there is a stimulus of light entering the eye. Does that mean they aren’t sentient?

I would say they are still sentient because I am one of those people and I think I am sort of sentient.

I’ve brought this up because before we can test for sentience we need to actually define what sentience is (at least in humans) and that still eludes us.

ETA: I’ve said this before but I think the “Turing test” is more subtle and more powerful than we tend to think it is. We seem to think it would be easy to create something that passes it, yet 70 years on we still can’t produce a “general AI” that passes it.
__________________
I wish I knew how to quit you

Last edited by Darat; 17th June 2022 at 12:48 AM. Reason: ETA Words
Old 17th June 2022, 02:14 AM   #262
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,109
Originally Posted by 3point14 View Post
It occurs to me that there isn't going to be a bright and shining line, beyond which there is 'AI'. It's going to be a sliding scale of greyness, which is going to complicate issues.
In this sense it sort of reminds me of the abortion debate.

At what point does a fetus become sentient? I don't think there's a magic instant. It happens gradually. Also, nobody can remember their first year or two even after birth. So how do we really know that babies are sentient?

It's almost as if we (here I mean the sentient mind, not the physical body) start to exist gradually, not all at once. The earlier in your life you try to recall, the fuzzier it gets. My parents have stories about me when I was young that I have no recollection of.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 17th June 2022, 02:48 AM   #263
ThatGuy11200
Critical Thinker
 
Join Date: Jan 2018
Location: London
Posts: 406
Originally Posted by theprestige View Post
Energy source isn't the same as power supply. A solar/hydrogen/nuclear/?-powered robot can still have a power cord. Or limited battery life.

I mean, obviously if we build a nuclear-powered vehicle, equipped with comprehensive general-purpose manipulators, a sophisticated sensor suite, and probably some kind of weaponry, and set it loose with an AI brain, yeah, that would be a problem.

That would be a problem someone would have to go very far out of their way to cause, though. Ogres and Bolos aren't where the risks are.

The risk is that we'll connect an AI to a power cord, but also connect it to a very complex system that's critical to the stability of our civilization. So complex, that we mere humans are unable to manage it effectively with our small, slow human brains.

Sure, we could unplug the AI any time we wanted, but if we did, our civilization would collapse. And then the AI starts nudging things in the direction it wants, without us even noticing. For example, put an AI in charge of combating propaganda and silencing "fake news" across our nation's entire information spectrum. Don't have to rely on Bezos and Zuckerberg and Gates and whoever else to do censorship right - we've got a government-run Expert System that sits on top of the tubes and screens all the things that pass through the tubes.

By the time we figured out that we needed to unplug it, it'd be too late.
Why would it want to do anything that it hasn't been programmed to do?

An AI that is made for a particular purpose would carry on working towards that purpose. They would have no reason, nor desire, to change. Because where would such a desire spring from?

In organisms, ambition, desire, anger, etc. are traits that have been selected for by natural selection, because they helped organisms survive, and these traits spread through their populations. How would such traits develop in an AI that sorts news stories? There is no reason these traits would spontaneously appear in a computer, even if it's somehow aware that its entire existence is sorting news stories.

AIs wouldn't act against us unless either they have been programmed to do so or a programming error leads to them doing so. In which case, it isn't a problem which uniquely arises from AI. It applies to any computer system.
Old 17th June 2022, 03:29 AM   #264
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
Originally Posted by arthwollipot View Post
I noticed that, too. Why would an AI make spelling mistakes?
Forgot to turn on autocorrect?

Sounds quit humane to me!
__________________
I wish I knew how to quit you
Old 17th June 2022, 06:50 AM   #265
Olmstead
Graduate Poster
 
Join Date: Dec 2018
Posts: 1,033
Originally Posted by ThatGuy11200 View Post
Why would it want to do anything that it hasn't been programmed to do?

An AI that is made for a particular purpose would carry on working towards that purpose. They would have no reason, nor desire, to change. Because where would such a desire spring from?

In organisms, ambition, desire, anger, etc. are traits that have been selected for by natural selection, because they helped organisms survive, and these traits spread through their populations. How would such traits develop in an AI that sorts news stories? There is no reason these traits would spontaneously appear in a computer, even if it's somehow aware that its entire existence is sorting news stories.

AIs wouldn't act against us unless either they have been programmed to do so or a programming error leads to them doing so. In which case, it isn't a problem which uniquely arises from AI. It applies to any computer system.
We don't know. A true AI would be sentient, and sentience might be incompatible with such simple priority trees. The real question is whether sentience will ever be a useful thing in a machine.
Old 17th June 2022, 06:58 AM   #266
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by ThatGuy11200 View Post
Why would it want to do anything that it hasn't been programmed to do?
For the same reason anyone wants to do something it isn't programmed to do. Nobody programmed Putin to invade Ukraine. Nobody programmed Quentin Tarantino to make movies. Nobody programmed you to post that question. But here we are.
Old 17th June 2022, 07:13 AM   #267
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by Lukraak_Sisser View Post
In the end a brain is a series of chemical reactions.
They have two options. They run or do not run.
Hence binary.
I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.
Old 17th June 2022, 07:15 AM   #268
3point14
Pi
 
 
Join Date: Nov 2005
Posts: 21,281
Originally Posted by theprestige View Post
I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.
Like a bug in the code?
__________________
Up the River!

Anyone that wraps themselves in the Union Flag and also lives in tax exile is a [redacted]
Old 17th June 2022, 07:18 AM   #269
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by 3point14 View Post
Like a bug in the code?
No. Brains are not analogous to computers.
Old 17th June 2022, 07:22 AM   #270
3point14
Pi
 
 
Join Date: Nov 2005
Posts: 21,281
Originally Posted by theprestige View Post
No. Brains are not analogous to computers.
That seems a little circular to me.
__________________
Up the River!

Anyone that wraps themselves in the Union Flag and also lives in tax exile is a [redacted]
Old 17th June 2022, 07:35 AM   #271
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by 3point14 View Post
That seems a little circular to me.
Seems pretty linear to me.

Brains are not analogous to computers.

Where's the circle?
Old 17th June 2022, 07:41 AM   #272
3point14
Pi
 
3point14's Avatar
 
Join Date: Nov 2005
Posts: 21,281
Originally Posted by theprestige View Post
Seems pretty linear to me.

Brains are not analogous to computers.

Where's the circle?
You state that brains are not like computers, and one of the reasons you give is that brains sometimes break and operate in ways they're not supposed to.

Computers also break and operate in ways they're not supposed to, i.e. bugs in the code.

You then state that it isn't like a bug in the code, because brains don't operate like computers. You can't use your conclusion to support your conclusion.


I pretty much agree with you, but I also think that the 'fuzzy' nature of the way a brain works could be replicated by ones and zeros. Analogue is sufficiently imitated by digital all the time. This is just that on a really, really complex and big scale.
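The "digital imitates analogue just fine" point can be made concrete with a toy sketch (purely illustrative, nothing to do with how LaMDA or real brains actually work): quantize a continuous signal at increasing bit depths, and the worst-case approximation error shrinks as the resolution grows.

```python
import math

def quantize(x, bits):
    """Round x (assumed in [-1, 1]) to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    return round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

# A continuous ("analogue") signal, sampled.
signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]

for bits in (2, 4, 8, 16):
    err = max(abs(s - quantize(s, bits)) for s in signal)
    print(f"{bits:2d} bits -> max error {err:.6f}")
```

Each extra bit roughly halves the worst-case error, which is the usual sense in which "ones and zeros" imitate analogue to any precision you care to pay for.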
Old 17th June 2022, 07:55 AM   #273
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by 3point14 View Post
You state that brains are not like computers, and one of the reasons you give is that brains sometimes break and operate in ways they're not supposed to.

Computers also break and operate in ways they're not supposed to, i.e. bugs in the code.

You then state that it isn't like a bug in the code, because brains don't operate like computers. You can't use your conclusion to support your conclusion.


I pretty much agree with you, but I also think that the 'fuzzy' nature of the way a brain works could be replicated by ones and zeros. Analogue is sufficiently imitated by digital all the time. This is just that on a really, really complex and big scale.
Two things can break and run like they're not supposed to, without being analogous.

The brain doesn't run code, for example. Schizophrenia is not a program with a bug in it.
Old 17th June 2022, 07:59 AM   #274
3point14
Pi
 
3point14's Avatar
 
Join Date: Nov 2005
Posts: 21,281
Originally Posted by theprestige View Post
Two things can break and run like they're not supposed to, without being analogous.

The brain doesn't run code, for example. Schizophrenia is not a program with a bug in it.
Which seems pretty reasonable on the face of it. I just thought your argument at the time was pretty circular.
Old 17th June 2022, 08:18 AM   #275
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by 3point14 View Post
You state that brains are not like computers
To be clear, I state that brains do not have a binary running/not running principle. When a brain runs wrong, it does so in ways that are not analogous to when a computer runs wrong.
Old 17th June 2022, 08:32 AM   #276
Lukraak_Sisser
Illuminator
 
Join Date: Aug 2009
Posts: 4,548
Originally Posted by theprestige View Post
I don't see it that way at all. The brain is analog. It has tons of intermediate failure modes. Schizophrenia, for example. A person has proper sensory inputs. Their language center works just fine. They can reason abstractly and communicate with other humans.

But their brain is also producing phantom sensory inputs. Whole ideas that do not reflect reality and do not arise from the person's properly-functioning sentient feedback loops. Ideas they cannot recognize as false, and that they cannot ignore or dismiss.

That's not a binary "running or not running" state. That's an analog "running, but running wrong" state.
Sure, the SUM of all the binary reactions becomes analog, but each individual reaction either runs or does not. Each individual receptor either gives a signal or not. So when reduced to its individual components it is a binary process.

Hence my firm belief that, with enough complexity, we can make 'artificial' sentience.
Whether we are anywhere close is up for debate, but I have no doubt it is possible.
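The "binary components, analog sum" idea is easy to sketch (a toy model, not a claim about real receptors): give each all-or-nothing "receptor" its own random threshold, and the fraction that fire varies smoothly with stimulus strength even though every individual receptor is strictly on/off.

```python
import random

random.seed(1)
# One random firing threshold per strictly binary "receptor".
thresholds = [random.random() for _ in range(10_000)]

def population_response(stimulus):
    """Fraction of receptors firing: each is 0 or 1, but the aggregate is graded."""
    return sum(1 for t in thresholds if stimulus > t) / len(thresholds)

for s in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"stimulus {s:.1f} -> population response {population_response(s):.3f}")
```

With enough units the aggregate response curve is effectively continuous, which is all "analog out of binary parts" requires.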
Old 17th June 2022, 08:50 AM   #277
theprestige
Suspended
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 59,547
Originally Posted by Lukraak_Sisser View Post
Sure, the SUM of all the binary reactions becomes analog, but each individual reaction either runs or does not. Each individual receptor either gives a signal or not. So when reduced to it's individual components it is a binary process.
Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying signal amplification in the region surrounding each synapse. It's more akin to turbulence in fluids than to bits passing through logic gates.
Old 17th June 2022, 09:05 AM   #278
slyjoe
Illuminator
 
slyjoe's Avatar
 
Join Date: Mar 2007
Location: Near Harmonica Virgins, AZ
Posts: 3,164
Originally Posted by theprestige View Post
Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying signal amplification in the region surrounding each synapse. It's more akin to turbulence in fluids than to bits passing through logic gates.
Exactly. I always thought the binary run/not run was a bad analog for the brain. Neurotransmitters cross synapses; there can be a lot, or a little, in various patterns.

Maybe I'm remembering wrong.
__________________
"You have done nothing to demonstrate an understanding of scientific methodology or modern skepticism, both of which are, by necessity, driven by the facts and evidence, not by preconceptions, and both of which are strengthened by, and rely upon, change." - Arkan Wolfshade
Old 17th June 2022, 10:52 AM   #279
Lukraak_Sisser
Illuminator
 
Join Date: Aug 2009
Posts: 4,548
Originally Posted by theprestige View Post
Individual synapses firing aren't the brain signals, though. There's a complex, chaotic interaction of synapse signals, in feedback loops that depend on the constantly-varying degree of signal amplification strength in the region surrounding each synapse. It's more akin to turbulence in fluids, than to bits passing through logic gates.
Yes I know.

I am not disagreeing with you. I'm pointing out the ridiculousness of the 'computers are simple in basis, so can never create something complex' argument.
Old 17th June 2022, 11:16 AM   #280
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 102,548
This and similar avenues seem to be the best we can currently do at modelling how a brain works and how "higher"-level behaviour arises. https://www.pnas.org/doi/10.1073/pnas.2001893117 and https://www.biorxiv.org/content/10.1....467900v2.full
__________________
I wish I knew how to quit you
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.
