International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology

Old 13th September 2021, 06:12 AM   #41
Gord_in_Toronto
Penultimate Amazing
 
 
Join Date: Jul 2006
Posts: 21,176
Originally Posted by HansMustermann View Post
And that's kinda the whole point: it's just Eliza on steroids.

And it becomes even less impressive when you realize that some people developed emotional connections even with the dumb old Eliza. And a far larger subset were convinced that it had some kind of intelligence that it provably didn't have.

So, yes, try it on enough people, and you'll find one who's convinced he's talking to his late girlfriend.
Ah. But what if a more advanced version passes the Turing Test? Is there some point at which a perfect simulation of intelligence is made but we know that "under the hood" it's "just" programming?
__________________
"Reality is what's left when you cease to believe." Philip K. Dick
Old 13th September 2021, 06:13 AM   #42
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
@Mike Helland
But seriously, do you actually know what we're talking about? Because the conversation so far (especially messages #37 and #39) points at "not the faintest clue." And it's getting to be about as much of a waste of my time as talking physics with Pixie Of Key.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 06:17 AM.
Old 13th September 2021, 06:16 AM   #43
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Gord_in_Toronto View Post
Ah. But what if a more advanced version passes the Turing Test? Is there some point at which a perfect simulation of intelligence is made but we know that "under the hood" it's "just" programming?
Well, we HOPE that that's going to be the case one day. Or rather, it will be the programming AND the data it gathered after all those millions of runs. But anyway, that's the general hope.

Whether it will be the case or not, well, that remains to be seen.

It's kinda like searching for extraterrestrial intelligence. We don't see any reason why ET couldn't exist, or why we couldn't find one. But until we actually find one, it's just hope.


Edit: that said, technically, for some users Eliza and some of its variants pretty much passed the Turing test already. As I was saying, a few people were convinced that they were talking to a real person, and some even developed emotional connections to it. One could argue that those were sad, lonely and/or delusional people, but it did happen. Passing it for any random user, though, hasn't happened yet AFAIK.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 06:19 AM.
Old 13th September 2021, 06:17 AM   #44
Wudang
BOFH
 
 
Join Date: Jun 2003
Location: People's Republic of South Yorkshire
Posts: 13,806
Originally Posted by HansMustermann View Post
And that's kinda the whole point: it's just Eliza on steroids.

Which is why some people prefer to avoid the term "AI" in those contexts and use terms like "machine learning" etc., which better reflect its heavy dependence on pattern recognition.



Coincidentally, I was listening to this discussion with neuroscientist Anil Seth, who was saying that neuroscience now argues that when we see a bear we don't get a rush of fear which triggers adrenaline etc.; we get a rush of adrenaline etc., and these physiological reactions are then experienced by us as fear.
__________________
"Your deepest pools, like your deepest politicians and philosophers, often turn out more shallow than expected." Walter Scott.
Old 13th September 2021, 06:21 AM   #45
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
It's statistics of what you tried, and how much of that worked. You can't build those on just the input level. You have to actually run those millions of tries, to produce those statistics.
Can you explain the statistics technique that tells you what your next weights should be?
Old 13th September 2021, 06:24 AM   #46
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Wudang View Post
Which is why some people prefer to avoid the term "AI" in those contexts and use terms like "machine learning" etc., which better reflect its heavy dependence on pattern recognition.
Well, I would even agree, but for better or worse, "AI" is the term that everyone uses. It gets hard to discuss anything with most people if you insist on not calling it "AI" when they call it "AI", at least not without derailing the talk into a lexical debate.

And I'm at least trying not to trigger the mods with a thread derail. Not saying I always succeed, but I'm trying

Originally Posted by Wudang View Post
Coincidentally, I was listening to this discussion with neuroscientist Anil Seth, who was saying that neuroscience now argues that when we see a bear we don't get a rush of fear which triggers adrenaline etc.; we get a rush of adrenaline etc., and these physiological reactions are then experienced by us as fear.
That's what I was getting at when I was talking about those chemical mediators. But yes, you explained it better than I did.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 13th September 2021, 06:30 AM   #47
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
Can you explain the statistics technique that tells you what your next weights should be?
Eeehhh... You probably realize that I'm not going to write a whole course in machine learning here. Covering even the basics is stuff that runs at least a semester in college. Including the fact that you have more than one way to go about those statistics: Bayesian optimization, Gaussian processes, MCMC, and variational Bayes. (At a wild guess, it looks like the last one might have been used in that YouTube video.)

Luckily there are plenty of courses and tutorials online. E.g., https://machinelearningmastery.com/b...hine-learning/
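
But just to give the flavour of "statistics of what you tried, and how much of that worked", here's a minimal toy sketch in Python (NumPy only, everything about it made up, and nothing to do with whatever that video actually used). Each round, the next batch of candidate weights is sampled from statistics fitted to the best tries of the previous batch:

Code:
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # Made-up task: the closer the weights are to a hidden target, the better the score.
    return -np.sum((w - np.array([0.5, -1.0, 2.0])) ** 2)

mean, std = np.zeros(3), np.ones(3)                      # current statistics of "good" weights
for generation in range(50):
    candidates = rng.normal(mean, std, size=(100, 3))    # try 100 weight vectors
    scores = np.array([fitness(w) for w in candidates])
    elites = candidates[np.argsort(scores)[-10:]]        # keep the 10 that worked best
    # The statistics of what worked become the recipe for the next weights.
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print("learned weights:", mean.round(2))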
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 13th September 2021, 06:34 AM   #48
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
Eeehhh... You probably realize that I'm not going to write a whole course in machine learning here. Covering even the basics is stuff that runs at least a semester in college. Including the fact that you have more than one way to go about those statistics: Bayesian optimization, Gaussian processes, MCMC, and variational Bayes. (At a wild guess, it looks like the last one might have been used in that YouTube video.)

Luckily there are plenty of courses and tutorials online. E.g., https://machinelearningmastery.com/b...hine-learning/
The weights for the next generation are generated with random numbers.

This is what allows the AI to devise solutions that the creators did not intend or expect.
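
Something like this, roughly (a toy sketch only; the fitness function is made up and it's not any particular library's API). Each new generation is literally the old weights plus random noise, and whatever happens to score better survives:

Code:
import numpy as np

rng = np.random.default_rng(1)

def fitness(w):
    # Stand-in for "did this agent do well in the environment?" -- just a dummy score here.
    return -np.sum((w - 3.0) ** 2)

population = rng.normal(size=(20, 5))        # generation 0: pure random weights
for generation in range(200):
    scores = np.array([fitness(w) for w in population])
    survivors = population[np.argsort(scores)[-5:]]
    # Next generation: the survivors plus randomly mutated copies of them.
    mutants = survivors.repeat(3, axis=0) + rng.normal(scale=0.1, size=(15, 5))
    population = np.vstack([survivors, mutants])

best = population[np.argmax([fitness(w) for w in population])]
print(best.round(2))   # nobody hand-picked these numbers; they fell out of the noise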
Old 13th September 2021, 06:48 AM   #49
Wudang
BOFH
 
 
Join Date: Jun 2003
Location: People's Republic of South Yorkshire
Posts: 13,806
Originally Posted by HansMustermann View Post
Well, I would even agree, but for better or worse, "AI" is the term that everyone uses. It gets hard to discuss anything with most people if you insist on not calling it "AI" when they call it "AI", at least not without derailing the talk into a lexical debate.

In IT circles, fair enough but when talking to people less familiar with the topic (including many IT people) I think it's important to explain that the "Intelligence" in AI is not what they might think.



Originally Posted by HansMustermann View Post
And I'm at least trying not to trigger the mods with a thread derail. Not saying I always succeed, but I'm trying

That's what I was getting at when I was talking about those chemical mediators.

In a very narrow sense we have something like that. For a few years I used various products (or wrote them) to monitor servers for a bank and if certain conditions prevailed, whisper a warning or scream loud. But I would never define any of these as anything to do with self awareness. And all the intelligence was from the people who worked out what were important events. It's no more self-awareness than an OS's scheduler deciding which process gets a slice of processor time next.

The only "AI" in question was something a colleague wrote as part of payback to the company for sponsoring his MSc in Machine Learning. And again all the intelligence was in his writing of the code to infer "the data centres hosting a business or technology service in near real time, using a custom machine learning approach based on hypothesis testing."
__________________
"Your deepest pools, like your deepest politicians and philosophers, often turn out more shallow than expected." Walter Scott.
Old 13th September 2021, 07:17 AM   #50
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
The weights for the next generation are generated with random numbers.

This is what allows the AI to devise solutions that the creators did not intend or expect.
At a basic level, yes. I thought that much was clear when I already talked about random movements in that video.

All I'm saying is that none of that implies any kind of intelligence or intent on the program's part, much less having any feelings or anything. It will just do what it's coded to do. In this case crunch probabilities based partially on random numbers. And even those will only ever be those it's allowed to train. Like, the one in the street view link, won't ever train anything else than those two functions. Or in the video, it will only train to apply a force to itself or to a block. It may end up with a function you didn't expect, but it can't decide to train to do crossword puzzles instead. Nor to feel happy, sad, or anything, if you haven't included those.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 07:26 AM.
Old 13th September 2021, 08:52 AM   #51
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
It will just do what it's coded to do. In this case crunch probabilities based partially on random numbers.
Your position makes sense when we consider something like a neural net that can take a 16x16 image and figure out if it's a 0, 1, 2, 3... 9.

Such an NN would have 256 input nodes (one per pixel) and 10 output nodes.

The output will have to be 0 thru 9, or maybe some combination of them. But that's all it will do.

You give it an image, it runs, pops out an answer.
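
Roughly like this, as a sketch (toy sizes, random untrained weights, NumPy only):

Code:
import numpy as np

rng = np.random.default_rng(2)

# 256 pixel inputs -> 32 hidden units -> 10 digit scores.
W1, b1 = rng.normal(scale=0.1, size=(256, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(32, 10)), np.zeros(10)

def classify(image_16x16):
    x = image_16x16.reshape(256)          # one input per pixel
    h = np.maximum(0.0, x @ W1 + b1)      # hidden layer (ReLU)
    scores = h @ W2 + b2                  # one score per digit, 0 through 9
    return int(np.argmax(scores))         # pops out an answer, and that's all it does

print(classify(rng.random((16, 16))))     # some digit; meaningless until trained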

On the other hand, the hide&seek AI, probably has output nodes for "turn left", "turn right", "go forward", "pickup/drop box".

In a sense, this is all it can do.

But that's not all it can do. The NN doesn't pop out one command. It runs constantly and puts out a large number of commands, over and over, generating complexity out of the environment.

They get better and develop new strategies. Stuff the programmers never expected. Stuff they didn't even know was possible in the environment until the agents developed exploits.

Changing gears for a second, are humans universally self-aware? Do they act with purpose? Do they always know what they're looking at, with complete context?

Are they all happy? Are you happy?

Do you think your brain has a "happy" output node?

I don't.

For the same reason there is no "achoo" output node. Deaf people don't say achoo. It's completely learned from others.

Would you say "achoo" if you never heard anyone else sneeze?

Would you say you are happy if you never heard anyone else say they're happy?

Would you know how to work a crossword puzzle without anyone ever explaining it to you?

I kinda doubt it.

Not only do you seem to be oversimplifying AI, you seem to be overestimating people.

Most people who say they're happy are lying. There are enough antidepressants in the ocean's fish to suggest that happiness might be some kind of social construct that we're all pursuing even though we don't know why.

Real happiness is feeling sleepy after food or sex. That's truly what the body needs. Something to eat, or it dies. And someone to reproduce with, or the species won't continue.

That is specifically what we are hard wired for. That's why everyone gets sleepy after a meal or sex. The body is happy. Take a break. Rest up. And then go do it again.
Old 13th September 2021, 09:54 AM   #52
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
Your position makes sense when we consider something like a neural net that can take a 16x16 image and figure out if it's a 0, 1, 2, 3... 9.

Such an NN would have 256 input nodes (one per pixel) and 10 output nodes.

The output will have to be 0 thru 9, or maybe some combination of them. But that's all it will do.

You give it an image, it runs, pops out an answer.

On the other hand, the hide&seek AI, probably has output nodes for "turn left", "turn right", "go forward", "pickup/drop box".

In a sense, this is all it can do.
Bingo.

Originally Posted by Mike Helland View Post
But that's not all it can do. The NN doesn't pop out one command. It runs constantly and puts out a large number of commands, over and over, generating complexity out of the environment.

They get better and develop new strategies. Stuff the programmers never expected. Stuff they didn't even know was possible in the environment until the agents developed exploits.
Well, yes, that's the whole point of why we're using machine learning.

But that still won't train anything it's not programmed to train. If it's a hide and seek simulation, it can't and won't decide to solve face recognition instead. If it's a street view program, it can't decide to solve airplane design instead. Nor to solve the problem of whether it's actually happy.

Moreover, it's not even just which problem it will solve. It's also what kind of analysis and parameters it's allowed to use. E.g., for the street view link, not only will it only ever train those two functions in the paper, it will also only do so by fiddling with the parameters and data it's allowed to fiddle with.

So what I'm saying for the topic we started from is:

A) if I don't include a "happiness" parameter for it, that's that. It won't ever be happy or unhappy, no matter what shortcuts it can find in the function it's allowed to calculate.

B) even if I do decide to include a "happiness" variable (e.g., as a way to motivate it to find solutions), it doesn't have to be something learned. Those can be handled by hard-coded functions elsewhere, that the learning program can't actually mess with. It can (and probably should) be an INPUT for the actual learning program, not an OUTPUT.

E.g., for a self-driving car, happiness can be determined by coming within the minimum possible distance from the destination, while still on a road. E.g., for a glorified Roomba, like the Rosie robot from the Jetsons, it can be something like what percentage of the surface it cleaned. (Possibly even the square of that, so it only feels any significant happiness when it's really close to doing every square inch. See the sketch after point D below for what I mean.)

C) even if the criteria for the "happiness" variable are based on machine learning, they can be learned by a completely different machine, using human input and review. The actual cleaning robot you buy will have those in read-only memory, and won't be able to decide to be happy or unhappy based on different criteria than whatever it shipped with.

D) even if I make it fully self-aware, i.e., not just feeling stuff, but knowing exactly what stuff it feels and exactly why, it's still up to me as the programmer to decide how it should feel about that too. It can be nothing at all. I can even make it feel happy if it remembers that I own its shiny metal ass, in its downtime. (Wouldn't want to interfere with finishing the job to feel happy, when it has a job to do.) Or whatever.
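
To make B and C concrete, here's a minimal sketch (all the names are made up) of what I mean by the "happiness" signal being hard-coded and read-only as far as the learner is concerned: an input it consumes, not an output it gets to redefine.

Code:
# Hypothetical cleaning robot. The "happiness" function ships in read-only firmware;
# the learning part only ever SEES the number, it can't change how it's computed.

def happiness(fraction_cleaned: float) -> float:
    # Hard-coded by the manufacturer: squared, so it only "feels" really good
    # when nearly every square inch is done.
    return fraction_cleaned ** 2

class CleaningPolicy:
    def __init__(self):
        self.weights = [0.0] * 8   # the only thing learning is allowed to touch

    def update(self, reward: float):
        # Whatever learning rule you like goes here. It consumes the reward;
        # it has no say in what counts as rewarding.
        ...

policy = CleaningPolicy()
policy.update(happiness(fraction_cleaned=0.93))   # about 0.86 -- not "happy" enough yet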


Basically the short version is: in SF, basically any program left running long enough develops sentience and starts having needs, wants, feelings, whatever, that nobody could even foresee, much less control. In any real-life scenario, this will only happen if the programmers are truly and utterly incompetent; in fact, only if they don't even know what they're doing and why. (Then again, seeing how some companies hire the lowest bidder...)


As for the difference from humans, it's literally just this:

- for humans it's whatever signals and criteria were useful for keeping a monkey alive long enough to procreate. Preferably more than once. That's how evolution works.

- for a robot it's whatever I give it (assuming I know how,) in order to make it more useful to me.

E.g., since it doesn't need to keep fit (unlike muscles, a motor puts out the same torque, regardless of whether it was used lots or never), I don't need to give it a boredom signal when it's idle. It can just stay idle and save me some money on the power bill until I have something for it to do.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 10:14 AM.
Old 13th September 2021, 04:26 PM   #53
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
A) if I don't include a "happiness" parameter for it, that's that.
I think that's a bit naive.

What you really need is a few output nodes for hungry and horny and fight/flight.

If those output nodes aren't fired up, you're happy.

Works for robots too. "Do I need to recharge my batteries? Is the floor clean? Now I can rest."

Or do you think that happiness evolved separately from the instinct to survive and reproduce?

Can reptiles be happy?
Old 13th September 2021, 07:30 PM   #54
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by HansMustermann View Post
Luckily enough, though, "self-aware" doesn't mean "conscious." Not the least, lucky because we don't really have definition of "consciousness" that actually works for anything else than philosophy debates. I'd also point out that even those usually fail to capture the meaning used by the layman, which is basically a vague and nebulous "whatever the heck makes me more special than my cat." Most of the definitions of consciousness fail to do that, by just requiring some form of being able to experience sensations OR be aware that stuff exists, depending on whose definition you use. In the process, they allow even a Roomba to qualify as conscious, since it's aware of its surroundings and has sensors that allow it to 'feel' when it bumped into a table leg. That's what tends to happen when you try to move from an undefined "whatever I can claim to be, in order to feel special" to trying to actually have a rigorous definition of exactly what is required.
Not buying this. Consciousness is not all that our brains are doing. A lot of brain activity doesn't pass through the consciousness center. And we've learned a lot about what different parts of the brain are doing.

But it isn't clear (the last time I checked) which brain structure our consciousness occurs in. Finding the structure is one thing. Understanding how it results in a consciousness experience is another. One thing consciousness is not is just a bunch of stored data and an algorithm.

There is no special feeling about it. It's not a philosophical thing; it is a biological thing.

Last edited by Skeptic Ginger; 13th September 2021 at 07:53 PM.
Old 13th September 2021, 07:36 PM   #55
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Wudang View Post
Coincidentally, I was listening to this discussion with neuroscientist Anil Seth, who was saying that neuroscience now argues that when we see a bear we don't get a rush of fear which triggers adrenaline etc.; we get a rush of adrenaline etc., and these physiological reactions are then experienced by us as fear.
Did he say how this was known?

It only partly makes sense to me. Say the fear reflex that is triggered when you see that bear doesn't go through your consciousness center. It's a reflex like a lot of those we have which don't rely on the conscious center of the brain because that would slow the reflex down. You pull your hand off the hot stove because the reflex is in the spinal column (or near to it). The pain is registered and you reflexively pull your hand back.

I can see a fight or flight reflex working before your conscious brain is involved. But if it were just the adrenalin, how would you know what you were afraid of?

Last edited by Skeptic Ginger; 13th September 2021 at 07:54 PM.
Old 13th September 2021, 07:48 PM   #56
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Mike Helland View Post
The weights for the next generation are generated with random numbers.

This is what allows the AI to devise solutions that the creators did not intend or expect.
Coming up with creative solutions can occur with algorithms. There's an interesting lecture online somewhere from the U of WA where the professor input the parameters, or whatever it is they input, and let the program run to see what evolved. The results were surprising; for example, one creature moved by standing up and falling over to reach the next step.

Some of those creatures wouldn't be the fittest. But the point is it didn't take a conscious intelligence to create things the programmers didn't specifically program. It just took some data and an algorithm.
Old 13th September 2021, 07:51 PM   #57
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Wudang View Post
In IT circles, fair enough but when talking to people less familiar with the topic (including many IT people) I think it's important to explain that the "Intelligence" in AI is not what they might think.
If one separates out consciousness, it becomes easy to explain that the "intelligence" in AI is not that.
Old 13th September 2021, 08:01 PM   #58
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Mike Helland View Post
How can we be sure we haven't already?

One thing we can say about the brain is that it's a model of reality, inside reality.

If there's panpsychism going on, and all things have "being", than a model of reality has being.

In that case anything that is a dynamic model of reality, has some kind of consciousness.

It's not so much that you and I are conscious beings, but we contain models of reality within reality.

The universe is conscious of itself through those models of itself.

A self driving car does the same thing.
Re the bolded, we know it because brain studies are that advanced. We've learned a lot about brain functions from people with specific lesions. Like a person with one lesion can put their hand correctly in a slot but cannot draw the direction the slot is aimed at.

To determine the answer to the OP question one has to get up to speed with what we know about brain function. You can't figure it out with universe-pondering or navel-gazing.
Old 13th September 2021, 08:01 PM   #59
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 73,421
Originally Posted by Skeptic Ginger View Post
But it isn't clear (the last time I checked) which brain structure our consciousness occurs in. Finding the structure is one thing. Understanding how it results in a consciousness experience is another. One thing consciousness is not is just a bunch of stored data and an algorithm.
I'm not an expert, but as far as I know it is almost certain that consciousness is not localised to a single brain structure. It is a distributed aggregate that is a result of activity throughout the brain.
__________________
We are all #KenBehrens
Old 13th September 2021, 08:07 PM   #60
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by HansMustermann View Post
I may not be the best authority, but yes, I have enough experience with programming, some of it even with AI, to know that a program doesn't do anything else than what its source code says. Which may not be what you intended if you've got bugs, but it's still just exactly what the source code says. If you have a bug, you just wrote code that does something else than you intended, but what the system does is still just follow what the source code says.

If that source code doesn't produce an "I'm unhappy" state of some sort, then it just does not happen.

You don't even need to be an expert in AI or even programming to know that's not the case. You just need to have even the most basic understanding of programming. If you've ever written even a Hello World and understand why it will always produce "Hello World", and not go "Screw you, I'm bored of this Hello World nonsense" even in a trillion years, that's all the knowledge you need.
For the record, by looking at other mammals, especially primates, we have discovered that emotions, including things like a sense of fairness, evolved. These I consider to be part of the consciousness experience, because we don't necessarily experience those emotions outside of consciousness. But that isn't 100% certain. Take a person who had a stroke and doesn't recognize people: the stroke victim might experience certain emotions around that person even when they don't consciously recognize them. I'm not certain this concept is well studied.
Old 13th September 2021, 08:11 PM   #61
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Gord_in_Toronto View Post
If a machine says, "I'm conscious", how we would we know any different?

I'll toss this into the discussion:

A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down.

https://www.theregister.com/2021/09/..._openai_gpt_3/

It may be just Eliza on steroids, but so convincing.
I suspect the Turing Test parameters need to be updated.
Old 13th September 2021, 08:17 PM   #62
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by arthwollipot View Post
I'm not an expert, but as far as I know it is almost certain that consciousness is not localised to a single brain structure. It is a distributed aggregate that is a result of activity throughout the brain.
Got a link?

There is the interesting phenomenon where a person with their brain cut in half acts like 2 different people. I'll have to look for a link on that.
Old 13th September 2021, 08:32 PM   #63
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
I think that's a bit naive.

What you really need is a few output nodes for hungry and horny and fight/flight.

If those output nodes aren't fired up, you're happy.
That's not how it works for a human brain, at the very least. Most of those signals have both a carrot and a stick component. Like, you both feel some kind of discomfort when a need is not satisfied, AND some form of pleasure when you do satisfy it. Like, since you mention "horny" as one signal, you also have sex feeling good when you do get to do something about it.

But more seriously, what do you think that mediators like endorphins, or the whole so called pleasure centers on the brain are for?

There are SOME situations that only have the stick, so to speak. For example, when your muscles are sore, you only have the pain receptors for lactic acid to stop you from keeping at it, but no pleasure center to reward you for stopping. But that's more the exception than the rule.

More importantly, it's rather naive from a psychology standpoint to think someone can stay happy just because their basic needs are met. I just gave you the example of the boredom that your brain gives you when your basic needs are met and you're not doing something else.


But, be that as it may. IF we decide to implement any of that for robots, yes, it will work in whatever way we want it to work. And not implement those we don't need. Or implement them differently than in humans.

SF has robots work in whatever way the author needs them for a metaphor for something or another about humans. (Hell, even the play that coined the term "robot" had them as a metaphor for the working class at the time.) So they're inexplicably given human needs and psychology, for no explained reason, or they're at the very least programmed so those can emerge. The ones we make IRL are whatever we need for practical purposes, not for literary reasons. Like, we make a Roomba in whatever way best gets your floor clean, not in whatever way makes a good metaphor for a domestic servant.


In fact, if you do want a metaphor, here's the real one if we ever decide to implement those carrot and stick signals: you know that experiment where monkeys got their pleasure centers on the brain stimulated if they press the right button? And then kept pushing it even at the expense of caring for their own wellbeing?

Yeah, THAT's what we'll do to those robots if we ever implement that kind of signals. Except the "button" will be stuff like "yay, I cleaned the floors for the master" or, yes, "yay, the master is finally screwing me again" for a sexbot.

We'll make them so single-mindedly addicted to keeping pushing whatever button we need pushed, that it won't even be funny as a metaphor.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 08:36 PM.
Old 13th September 2021, 08:33 PM   #64
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by Skeptic Ginger View Post
Coming up with creative solutions can occur with algorithms. There's an interesting lecture online somewhere from the U of WA where the professor input the parameters, or whatever it is they input, and let the program run to see what evolved. The results were surprising; for example, one creature moved by standing up and falling over to reach the next step.
I'd like to watch that lecture.

The point is, if an AI has output nodes such as "turn left", "turn right", "walk forward", "pickup/drop", that is what it is programmed to do.

But it still comes up with strategies like cooperation, hiding the ramps, and doing physics exploits that bounce them to other sides of the map.

It'd be like saying, I know the Hokey Pokey (put your left foot in, take your left foot out), and therefore I should be a qualified backup dancer for Justin Timberlake.
Old 13th September 2021, 08:40 PM   #65
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Skeptic Ginger View Post
For the record, by looking at other mammals, especially primates, we have discovered that emotions, including things like a sense of fairness, evolved. These I consider to be part of the consciousness experience, because we don't necessarily experience those emotions outside of consciousness. But that isn't 100% certain. Take a person who had a stroke and doesn't recognize people: the stroke victim might experience certain emotions around that person even when they don't consciously recognize them. I'm not certain this concept is well studied.
Well, sure, because as I was saying, whatever emotions you get are whatever evolution needed for you to keep passing your genes on. That includes whatever is needed for group behaviours that help you in a social species. In a robot, we can just not implement or hard override whatever we don't need or want.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 13th September 2021, 08:59 PM   #66
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
I'd like to watch that lecture.

The point is, if an AI has output nodes such as "turn left", "turn right", "walk forward", "pickup/drop", that is what it is programmed to do.

But it still comes up with strategies like cooperation, hiding the ramps, and doing physics exploits that bounce them to other sides of the map.
Actually my point is that the AI has none of those concepts. Each of them does some random stuff until some chain of actions results in the game being won. Like, one of them discovers that pulling one block to a door wins the game, while the other discovers that pulling the other block to the other door wins the game. They have ZERO concept of the need to work together, the power of friendship, yada yada, like a bad anime for young boys scenario. Just each independently discovered half of the solution, as if the other actor on the team didn't even exist. Hell, even if you make an AI that takes the position of other actors as inputs, it still will be just discovering a random solution, not any actual concept of the value of teamwork or anything.

Essentially a lot of the hype around machine learning is around basically the same Hyperactive Agency Detection that was probably also responsible for religion. It's essentially just like pareidolia, except it's for stuff happening instead of shapes. Just like in pareidolia you can see 3 points as a face, in hyperactive agency detection you see something happening as an action by an actor. Typically involving an intent.

We're pretty much hard-wired for that, because of evolutionary pressures. The cost of mistaking the leaves rustling in the wind for a tiger in the bushes is much lower than the cost of mistaking a real tiger for the wind. Simply put, do the latter once or twice, and you're out of the gene pool. So the evolutionary pressure was to err in the former direction, rather than the latter.

And we still do that. When you hear people talk about stuff like "my computer hates me", yeah, that's what it is. And so it is with seeing stuff like AI actors learning the value of cooperation and whatnot, when it's really just two actors independently learning half of the solution, as if the other one didn't even exist. We see stuff like the AI "trying to cheat" when actually all it's done was randomly stumble upon a function that works, but incidentally uses different data than you thought it would, without having any actual idea that it was "cheating" or anything. Etc.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 09:04 PM.
Old 13th September 2021, 09:15 PM   #67
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
They have ZERO concept of the need to work together
That's your opinion.

They don't have a concept of what they do. Yet they do it.

I think humans could be wiped of all academic explanations of capitalism and socialism, and yet, somehow find strategies of competition and cooperation depending on the goals.

Finding an "-ism" for how people behave doesn't actually make it any more profound. Often less.
Old 13th September 2021, 09:36 PM   #68
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
That's your opinion.

They don't have a concept of what they do. Yet they do it.
Yes, that's exactly what I was saying.

Originally Posted by Mike Helland View Post
I think humans could be wiped of all academic explanations of capitalism and socialism, and yet, somehow find strategies of competition and cooperation depending on the goals.

Finding an "-ism" for how people behave doesn't actually make it any more profound. Often less.
The difference is that even without an "-ism" people would figure out notions like, in the immortal words of Franklin, "We must all hang together, or, most assuredly, we shall all hang separately." And then plan their actions based on that.

Just each finding a random solution that just coincidentally happens to go in the same place as the other rebels and shooting at the same guys, is not the same thing.

Not the least because IRL you don't have millions of random tries until you discover what works.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 09:37 PM.
Old 13th September 2021, 10:10 PM   #69
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
Yes, that's exactly what I was saying.
What's your concept of what you're doing, in this thread?
Old 13th September 2021, 10:56 PM   #70
Skeptic Ginger
Nasty Woman
 
 
Join Date: Feb 2005
Posts: 90,524
Originally Posted by Mike Helland View Post
I'd like to watch that lecture.

The point is, if an AI has output nodes such as "turn left", "turn right", "walk forward", "pickup/drop", that is what it is programmed to do.

But it still comes up with strategies like cooperation, hiding the ramps, and doing physics exploits that bounce them to other sides of the map.

It'd be like saying, I know the Hokey Pokey (put your left foot in, take your left foot out), and therefore I should be a qualified backup dancer for Justin Timberlake.
I found it! Usually it takes me forever to find old lectures. The models start about halfway through but the whole lecture is worth watching. The falling creature starts at about minute 33.

https://www.youtube.com/watch?v=ECitgI0x55M
Old 13th September 2021, 11:08 PM   #71
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
What's your concept of what you're doing, in this thread?
Trying to guess where the heck you're running with the goalposts this time, or whether the vague generic stuff you wrote this time is actually going anywhere? I mean, funny you should ask that, when half the time what you reply has no obvious link even to what you're answering to, much less what this talk started from. (Hint: the spoiler part in message #17.) Do you even HAVE a point? Like, a general conclusion you can sum up? Like, WILL the robots inevitably develop feelings (like resenting being owned by humans, as per message #17) when that's not even in the set of numbers they're allowed to crunch? Won't they? Or WHAT? I mean, maybe we could even just agree and move on to more productive stuff, but I'd first have to know what it even is.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 11:24 PM.
Old 13th September 2021, 11:23 PM   #72
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
Do you even HAVE a point? Like, a general conclusion you can sum up?
Yep. If we spend a long time trying to prove to you, and to future deniers like you, that we have made consciousness, how many beings are we torturing to prove a point?
Old 13th September 2021, 11:36 PM   #73
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by Skeptic Ginger View Post
I found it! Usually it takes me forever to find old lectures. The models start about halfway through but the whole lecture is worth watching. The falling creature starts at about minute 33.

https://www.youtube.com/watch?v=ECitgI0x55M
Let's be honest. Daniel Dennett. Alan Alda.

Great video.

But that all involves genetic algorithms. "After blah blah iterations".

Those aren't classical algorithms. They're genetic algorithms.
Old 13th September 2021, 11:36 PM   #74
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
Originally Posted by Mike Helland View Post
Yep. If we spend a long time trying to prove to you, and to future deniers like you, that we have made consciousness, how many beings are we torturing to prove a point?
It's not being a "denier" unless it has actually been proven. Or at least supported. Just disagreeing with some random guy who wants to believe unsupported stuff is not being a denier; it's just being skeptical.

It works the same for AI, as it does for dowsing, clairvoyance, or flying pigs, really. You don't get to just postulate anything of the form of "X exists" or "Y happens" (which trivially reduces to the former, where X=an instance or way of Y happening) and just call whoever doesn't take your word for it a "denier". You get the burden of proof, silly.

The part about torturing beings is also begging the question, which is to say, circular logic. Nowhere did you support that that's actually the case.

But anyway, if you claim the AI can develop feelings in spite of not even having that in the set of numbers they're allowed to crunch, or experience anything even vaguely similar to torture, then it's your burden to show how. Like, EXACTLY how, not just handwave vague irrelevant trivialities that aren't even connected into a logical argument to that conclusion. I mean, it can even be that you know something relevant that I don't. It may even be that I'm not smart enough to figure it out myself. But that's why you have to show it, if you want that claim taken seriously. And by "show", I mean "present a sound logical argument".

Basically support your point or don't, but the irrelevant handwaving you've been doing here is not it. And an ego wank like "future deniers" is definitely not it. You might earn a right to be a bit snarky after you've supported your point, not INSTEAD of it.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 13th September 2021 at 11:42 PM.
Old 13th September 2021, 11:47 PM   #75
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 73,421
Originally Posted by Skeptic Ginger View Post
Got a link?
No, it's more a general impression that I've come to through the course of being vaguely interested in the subject for many years. I do own a pretty good book about it though - Mapping the Mind by Rita Carter, but it's quite a few years old now and possibly out of date.

Originally Posted by Skeptic Ginger View Post
There is the interesting phenomenon where a person with their brain cut in half acts like 2 different people. I'll have to look for a link on that.
Alien hand syndrome, where the corpus callosum is severed to prevent the spread of serious uncontrollable epileptic seizures. Yes, it's pretty weird. I had the impression that it was largely misreported or exaggerated, and I don't know the current state of the literature.
__________________
We are all #KenBehrens
Old 13th September 2021, 11:57 PM   #76
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,206
Originally Posted by HansMustermann View Post
It's not being a "denier" unless it has actually been proven.
.
Do you experience a sense of "being"?
Old 14th September 2021, 12:24 AM   #77
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 26,199
Originally Posted by HansMustermann View Post
We're turning it into the other thread we already had some time ago, but basically let's just say this: the robot won't feel sad or happy or anything else, unless we program it to.
Right. But I wonder whether a high level AI, much more sophisticated than a Roomba, will in effect, be programmed to feel simulated emotions.

Maybe they do it because people want a companion that feels genuine emotions. Maybe it makes it feel that much more "real" to the owner. They want a companion because they are lonely, not just an inanimate object.

Does AlphaZero feel happy when it wins at chess, I wonder? It's a "neural network" sort of AI, which sounds a lot like a brain of some sort. Aren't our own brains also "neural networks"?

Quote:
Traditional chess engines – including the world computer chess champion Stockfish and IBM’s ground-breaking Deep Blue – rely on thousands of rules and heuristics handcrafted by strong human players that try to account for every eventuality in a game. Shogi programs are also game specific, using similar search engines and algorithms to chess programs.

AlphaZero takes a totally different approach, replacing these hand-crafted rules with a deep neural network and general purpose algorithms that know nothing about the game beyond the basic rules.

To learn each game, an untrained neural network plays millions of games against itself via a process of trial and error called reinforcement learning. At first, it plays completely randomly, but over time the system learns from wins, losses, and draws to adjust the parameters of the neural network, making it more likely to choose advantageous moves in the future. The amount of training the network needs depends on the style and complexity of the game, taking approximately 9 hours for chess, 12 hours for shogi, and 13 days for Go.
So does this "reinforcement learning" involve something similar to the reward system that our own brains use to control our behavior? (I.e., "pain" to discourage unwanted behavior, and "pleasure" to encourage the desired behavior? In this case, winning and not losing the game.)
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare

Last edited by Puppycow; 14th September 2021 at 12:26 AM.
Old 14th September 2021, 12:38 AM   #78
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
@Puppycow
Reinforcement is exactly what I had in mind when I wrote points B, C and D in message #52.

However, just because nature does it in ways that involve a carrot and a stick as feelings or sensations doesn't mean that machine learning has to work the same way. A chess-playing AI doesn't actually have to feel anything. The fact that it won or lost just adjusts the probability matrix accordingly, and it moves on.

Think of, dunno, being an artillery observer. In a conflict you're not even feeling much pride or patriotism about. But it's your job. If the shells fell short of the target, you tell the guys with the guns by how many metres, they adjust the angle of the gun. Then it overshoots the target a bit, you tell them again by how many metres, they adjust again. You don't have to feel happy. You might in fact even be horrified by it, if they're shelling a village that also has civilians in it, in addition to the troops you're fighting. Or you might be too scared about yourself, to even feel anything at all about what's happening to the opponents. But you just evaluate and transmit the numbers just the same.

That's how it currently works for AI, really. It doesn't have to feel anything in particular. What reinforces something or not is just the hard outcome. Then you apply a formula to the numbers, and try again. Feelings have no use in it, so we don't program them.
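
If it helps, here's about the simplest possible version of that idea (a toy sketch, nothing like AlphaZero's actual training loop): the "reward" is literally just a number that nudges some probabilities, with no pleasure or pain anywhere in sight.

Code:
import math
import random

prefs = {"a": 0.0, "b": 0.0, "c": 0.0}    # preference scores for three possible moves

def choose():
    # Higher preference = proportionally more likely to be picked.
    weights = [math.exp(v) for v in prefs.values()]
    return random.choices(list(prefs), weights=weights)[0]

def hidden_game(move):
    return 1.0 if move == "b" else -1.0   # stand-in for "won" / "lost"

for _ in range(1000):
    move = choose()
    reward = hidden_game(move)            # +1 or -1, and that's the whole "experience"
    prefs[move] += 0.1 * reward           # won: more likely next time; lost: less likely

print(prefs)   # "b" ends up dominant, without anything ever feeling anything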

But we COULD decide to in the future, if we think it helps with anything.

Will we do that to robots? Maybe. Hard to say at this point. But we can do it in ways where, as far as the robot is concerned, it's something hard-wired, and it has no way to learn to feel any differently about something than how we intended it to feel.

And, as I was saying, when we do, it will be less like encouraging a teenager to feel happy to win at chess, as opposed to joining some gang. It will be more like a monkey with an electrode in its brain, getting a pleasure signal if it pressed the right button.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 14th September 2021 at 12:45 AM.
Old 14th September 2021, 12:53 AM   #79
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,212
I mean, even taking emotional support companions, do we REALLY want them to be able to experience any emotion? Or are we going to limit them to what the user wants?

Like, do we want the robot to eventually say, "Jesus Christ, can you even do anything else than whine and wallow in self-pity? You're so pathetic, you're depressing. I'm out of here." Or, as a different emotion, do we really want the robot to start whining about how it has it even worse than you?

I mean, it's not even a hypothetical about robots. We see the same about pets. A lot of people like dogs specifically because they're hard-wired to want to be near someone higher up in the pack hierarchy, and give the signals that humans (mis)interpret as affection. That predictability is the whole point.

Hell, we see the same about human relationships.

Seems to me like we're going to want emotional support robots to stay within a range that counts as support, and we'll calibrate and limit them to stay within that range.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 14th September 2021 at 12:55 AM.
Old 14th September 2021, 12:56 AM   #80
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 26,199
@ Hans:

Thanks for that explanation.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare