International Skeptics Forum » General Topics » Religion and Philosophy
Tags: free will

Old 5th October 2017, 12:13 PM   #121
Myriad
Hyperthetical
 
Myriad's Avatar
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,103
Originally Posted by barehl View Post
I'm not quite sure what you are trying to say. If you are talking about random behavior then that would obviously be detrimental. However, there is no reason to assume that free will is random.

Are you suggesting that free will implies that you could make detrimental decisions? That actually is a problem, which I think the brain solves with an inhibition system. When that system is defective, I believe you get things like Tourette's.

I don't mean random. I mean, without rolling dice or anything, deliberately and decisively walking over to my coffee table and bashing my head on it hard enough to fracture my skull.

Why would you call that action random? Probably because no one would predict I would do it. And why is that? Because I have no reason to do it, it being an action with no benefit and a lot of harm.

And that's fair enough. That assessment completely agrees with my own internal narrative of why I don't do it and don't want to do it.

But consider what that implies. If free will isn't the actual (not just hypothetical) possibility of doing something that we wouldn't expect to do and don't want to do because overall some other course of action better conforms to our assessment of benefit, then what is it?

And if that's what it is, and we have it, why don't we, because of it, hurl ourselves off of balconies all the time?

I don't think interdicting the intentions for such actions once they're consciously formed is how neural inhibition systems work. In Tourette's, for example, the failure of an inhibition system at that level of processing would imply that the Tourette's sufferer who blurts out racial or misogynistic slurs is consciously experiencing racially hateful or misogynistic ideation and (unlike the rest of us who are also presumably doing the same) is not inhibited from expressing it. My understanding, though, is that as far as we know such conscious ideation is not involved at all. Free will (as it's usually described) therefore wouldn't enter the picture.

It remains telling to me that the less we can explain a person's actions, and the more difficulty we have perceiving external, comprehensible reasons for a person to have made a particular choice, the less likely we are to perceive it as an act of free will. Instead, we speak of addictions, compulsions, impulses stemming from the heat of the moment, delusions, traumas, psychoses. Does that mean expressing free will requires acting in expected, explainable ways?
__________________
A zømbie once bit my sister...

Last edited by Myriad; 5th October 2017 at 12:14 PM.
Myriad is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 5th October 2017, 03:45 PM   #122
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
Not if consciousness also results from the same computation that enables the volition.
The first problem is that consciousness isn't possible in computational theory. You can't get consciousness regardless of what computation you do.

Quote:
(By the way, I think it is misleading to substitute "algorithm" or "equation" for "computation," because both of those alternatives imply a predictable if not already known result, and the outcomes of most computations in nature are not predictable except by performing an equivalent computation.)
I'm not exactly sure what this means. If you used a random number generator, or you were getting data from an external source, then the result would not be known in advance. But that unpredictability is a property of the inputs, not of the computation itself. For example, I don't know when my thermostat might kick on and cool or warm the house. I could estimate it more closely if the outside temperature were constant (and the sun didn't move), but it's pretty much unknown. My thermostat has nothing to do with consciousness, though.
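A minimal sketch of the thermostat point, assuming a simple hysteresis controller (the function and parameter names are invented for illustration, not any real thermostat API): the computation itself is fully deterministic, and whatever is unpredictable about when the heat kicks on comes entirely from the inputs.

```python
def thermostat(current_temp, setpoint, hysteresis=1.0, heating=False):
    """Deterministic controller: same inputs always give the same output.

    Any unpredictability in when the heat kicks on comes from the
    outside world (the temperature readings), not from the computation.
    """
    if current_temp < setpoint - hysteresis:
        return True   # turn heating on
    if current_temp > setpoint + hysteresis:
        return False  # turn heating off
    return heating    # inside the dead band: keep the previous state

# Identical inputs -> identical output, every time:
assert thermostat(18.0, 21.0) == thermostat(18.0, 21.0) == True
```

The point of the sketch is that feeding it a live temperature sensor would make its behavior hard to predict in practice, without the function itself being anything but a fixed mapping from inputs to outputs.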

Quote:
I hypothesize that the computation in question is the processing of memory and sensory input into a running narrative of agents acting in the world.
I don't know what this means. It almost sounds though like you are saying that you take the current machine state plus incoming data and then use an algorithm to create a result. That sounds like computation.

Quote:
It is narrative understanding that allows us to choose actions based on the outcomes we predict for them rather than solely on immediate response to stimuli. That's a highly advantageous ability.
I'm not sure what a narrative understanding is. I would agree that human behavior takes prediction into consideration. Other primates seem to have some ability to do this.
Old 5th October 2017, 03:53 PM   #123
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
I don't mean (and didn't say) that decisions don't happen because they're pre-determined. Pre-determined decisions would presumably still be decisions. I'm saying decisions don't actually happen at all. They're not things or events in reality. They're retrospective abstractions. They're a narrative device, a feature of a model of the world that's chunked into narratives of agents (including ourselves) acting with volition.
This sounds similar to Dennett's Multiple Drafts Model. I can disprove his model.

Quote:
Decision-making takes place in abstract narrative space. We can give such a narrative event a description such as "free-willed" (or for that matter "pre-determined") quite independently of whether the physical substrate behaves predictably or not. Declaring that determinism or predictability of the substrate means that the decision lacks free will is like declaring that the forests of Narnia must be black because the ink is.
I'm not sure what this means.
Old 5th October 2017, 04:18 PM   #124
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by SOdhner View Post
I think there's an important distinction between human consciousness, human-like consciousness, and consciousness in general.
Human consciousness should be one type within the set of all possible consciousnesses. But I believe you are only talking about those of at least human intelligence; for example, I assume you aren't talking about chimpanzee consciousness or its equivalent.

Quote:
I think a truly human consciousness is partly defined by it being literally part of a human and couldn't, therefore, exist without a human brain.
I don't think I would agree with that. I can't think of any solid reason why a non-biological device couldn't match the function of a human brain.

Quote:
But that's self-referential and obvious so I'm assuming that's not what we're talking about.
I wouldn't define it that way. It seems like anything that behaves like a human would have to be considered at least partly human. We don't get this distinction with Asimov because his robots' behavior is locked, so they aren't human at all.

Quote:
I think a human-like consciousness would be one that that humans could relate to on a surface level, regardless of whether or not it has any resemblance to human consciousness in deeper ways.
I don't know what this means. If you are talking about some silly program that can pass a tiny fraction of the Turing Test then that isn't human-like at all.

Quote:
This is typically what we're talking about when we talk about AI and stuff, we care about whether or not we can talk to it but if it's totally alien "under the hood" then we wouldn't know or care in most cases. I don't see any reason this would require a human-like body.
I assume you mean something like this: you wouldn't care whether a car had an internal combustion engine, a Stirling engine, a steam engine, or an electric motor as long as it functioned like a car. Any non-biological device is obviously not going to function in the same way as a brain. But those different types of motors are all things that deliver torque to an axle; they are alike in that way. A non-biological device would have to be capable of cognition. The specific workings don't really matter that much as long as it functions in a human fashion within the same environment. As far as I am aware (I could be wrong), it would have to have a great deal of overlap with human brain structure to do that.

Quote:
Consciousness in general could refer to either of those, or to things that aren't even remotely human-like.
Well, I sort of agree and sort of don't. The set of all consciousnesses would mostly be made up of unstable varieties, what we would think of as crazy. If we limit it to the set of rational or sane consciousnesses, then they can't be so very different.

Quote:
There's lots of ways to get information. Plug an ethernet cord into it. I'm just saying it doesn't need to have moving parts or a human-like body.
I don't understand. If I have a "human personality" box and it is plugged into an ethernet cable, how could I talk to it? Doesn't it need some kind of camera for vision and a microphone for hearing? Or do you mean by text, as if we were exchanging messages?
Old 5th October 2017, 04:27 PM   #125
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Argumemnon View Post
What I mean is that for the same known input, the program will deliver the same known output, essentially "switching" from one state to the other without any sort of decision-making process beyond the broadest possible meaning.
I agree. Unless you used some kind of random number generator, the decision is always predictable. However, randomness would typically lead to erratic behavior so it wouldn't help.

Quote:
Of course, the brain works that way too, but because much of the process (which is of a different nature and much more complex) and input is unknown, I wouldn't call it "switching".
Yes, if we only look at habitual action without volition. For that portion, brains usually give the same result, so the behavior should be similar to a finite automaton's.
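The finite-automaton comparison can be sketched as a toy deterministic automaton (the states and input symbols here are invented purely for illustration): absent a random number generator, the same input sequence from the same start state always ends in the same state, which is the "switching" behavior described above.

```python
# A minimal deterministic finite automaton: (state, input) -> next state.
# States and inputs are invented purely for illustration.
TRANSITIONS = {
    ("idle", "stimulus"): "alert",
    ("alert", "stimulus"): "acting",
    ("alert", "calm"): "idle",
    ("acting", "calm"): "idle",
}

def run(start, inputs):
    state = start
    for symbol in inputs:
        # Unknown (state, input) pairs leave the state unchanged.
        state = TRANSITIONS.get((state, symbol), state)
    return state

# The same input sequence always produces the same final state:
assert run("idle", ["stimulus", "stimulus"]) == "acting"
assert run("idle", ["stimulus", "stimulus"]) == run("idle", ["stimulus", "stimulus"])
```

As with the thermostat, any apparent unpredictability would come from not knowing the current state or the incoming inputs, not from the transition rules themselves.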
Old 5th October 2017, 04:41 PM   #126
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
I don't see that following. "If the idea that stars are other suns were correct, we should have achieved travel between them long ago." Says who, based on what rationale?
These are not directly related; one is observation and the other is transportation. If cognition, or what many today call general AI or AGI, were possible within computational theory, then I would expect that three generations of computer scientists would have either solved it or at least produced a firm theory that only required sufficient hardware. We have neither. Now, these are intelligent people, and computational theory is one of the most robust theories in existence. Different approaches have been tried in many different countries and many different labs. So, it doesn't make sense.

Quote:
Almost all approaches have been from the wrong direction. For instance, most work on computer-generated narrative has focused on stringing words together in a way that resembles the way words are strung together in a narrative, rather than on modeling a series of events and then describing it.
That would describe Cyc, which was based on philosophy. But what exactly are you defining as a narrative? To the best of my knowledge, a narrative is a description of a sequence of events. Similar words would be storyline, plan, script, or scenario.

Quote:
Does a computer program processing a short video and resolving it into a compact representation, in text form or otherwise, of what's going on — "a man looking at cars has to duck when someone throws a stone at him" — sound like a simple problem that should or would have been solved long ago? Some useful findings might come out of all the self-driving car research.
If someone is turning events into English sentences then I would agree that this is the wrong idea. I doubt anyone is doing that. I'm curious why you believe the self-driving cars are different.
Old 5th October 2017, 04:49 PM   #127
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by David Mo View Post
(a) After a long deliberation, you decide to choose one car instead of another.
(b) Somebody throws a stone and you automatically protect your head.
Don't you see any difference between (a) and (b)? Are they not two different kinds of behaviour?
a) seems to require volition.
b) seems to be automatic.

I'm not sure they are that different in terms of the decision itself, but clearly they are different in terms of how the decision came to be made. I would look for free will in (a) rather than in (b).
Old 5th October 2017, 04:51 PM   #128
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
For example, does the taking of one option occur after the process of deliberation? The narrative clearly says it does, but research in cognitive neuroscience suggests otherwise.
I would disagree with this. Harris made the same claim but he was clearly wrong.
Old 5th October 2017, 04:59 PM   #129
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
I don't mean random. I mean, without rolling dice or anything, deliberately and decisively walking over to my coffee table and bashing my head on it hard enough to fracture my skull.
I don't know what you are asking.

Quote:
But consider what that implies. If free will isn't the actual (not just hypothetical) possibility of doing something that we wouldn't expect to do and don't want to do because overall some other course of action better conforms to our assessment of benefit, then what is it?
I still don't know what you are asking.

Quote:
I don't think interdicting the intentions for such actions once they're consciously formed is how neural inhibition systems work. In Tourette's, for example, the failure of an inhibition system at that level of processing would imply that the Tourette's sufferer who blurts out racial or misogynistic slurs is consciously experiencing racially hateful or misogynistic ideation and (unlike the rest of us who are also presumably doing the same) is not inhibited from expressing it. My understanding, though, is that as far as we know such conscious ideation is not involved at all. Free will (as it's usually described) therefore wouldn't enter the picture.
I can't tell what you are trying to say. There is the scene in "Uncle Buck" where he talks to the school principal.

Quote:
It remains telling to me that the less we can explain a person's actions; the more difficulty we have perceiving external comprehensible reasons for a person to have made a particular choice; the less likely we are to perceive it as an act of free will. Instead, we speak of addictions, compulsions, impulses stemming from the heat of the moment, delusions, traumas, psychoses. Does that mean expressing free will requires acting in expected explainable ways?
No, not to me. I think some others have defined it that way though.
Old 5th October 2017, 10:35 PM   #130
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by Myriad View Post
Behaviors are real. The options are real, usually, The "process of deliberation," the "subject," the "evaluation," and the "taking of one of them" are elements of a subjective narrative description of what's going on.

To the extent that it's a narrative we all generally share and agree to apply, you can call it a "fact" if you want to. But because it's actually a narrative, it can't falsify other different narratives. For example, does the taking of one option occur after the process of deliberation? The narrative clearly says it does, but research in cognitive neuroscience suggests otherwise.
I am really amazed by your comment. I don’t understand how a mental process can exist without a subject that acts. “I decide,” but I don’t exist? I don’t understand how you can speak of a “volitional action” (your words) without a “process of deliberation.” I don’t understand how you can deny that you consider the advantages and disadvantages of different cars before choosing one of them. I am afraid you have some verbal confusions.

To speak of “computational,” “negotiating with the environment,” etc. is itself a “narrative.” This is your own narrative, which allows you to reject any evidence by saying that it is “a narrative.” But my description is neutral with respect to any interpretation (“narrative,” in your words) of the facts. You can affirm or deny the “volitional action” (your words), or present this action as “computational” or not. These are problems subsequent to the description of the “volitional action.” Therefore, you cannot deny that a “volitional action” is preceded by an evaluation of the alternatives without falling into absurdity.

I would be very grateful if you would tell us what “neuroscientific” sources suggest that the taking of an option precedes the deliberative process. It is an interesting point, but it seems to me that they will tell a different “story” than the one you imagine.
Old 5th October 2017, 10:38 PM   #131
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by barehl View Post
a) seems to require volition.
b) seems to be automatic.

I'm not sure they are that different in terms of the decision but clearly they are different in terms of how the decision came to be made. I would look for free will in a rather than in b.
This is an evident conclusion!
Old 6th October 2017, 02:27 AM   #132
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by David Mo View Post
I am really amazed by your comment. I don’t understand how a mental process can exist without a subject that acts. “I decide,” but I don’t exist? I don’t understand how you can speak of a “volitional action” (your words) without a “process of deliberation.” I don’t understand how you can deny that you consider the advantages and disadvantages of different cars before choosing one of them. I am afraid you have some verbal confusions.

To speak of “computational,” “negotiating with the environment,” etc. is itself a “narrative.” This is your own narrative, which allows you to reject any evidence by saying that it is “a narrative.” But my description is neutral with respect to any interpretation (“narrative,” in your words) of the facts. You can affirm or deny the “volitional action” (your words), or present this action as “computational” or not. These are problems subsequent to the description of the “volitional action.” Therefore, you cannot deny that a “volitional action” is preceded by an evaluation of the alternatives without falling into absurdity.

I would be very grateful if you would tell us what “neuroscientific” sources suggest that the taking of an option precedes the deliberative process. It is an interesting point, but it seems to me that they will tell a different “story” than the one you imagine.
ADDED ENDNOTE: If you believe that Sam Harris supports you, you are mistaken. Harris uses the word "decision" around fifty times in his best-seller on morality. There must be a reason for that, mustn't there? Apart from the fact that he is not a neuroscientist.
Old 6th October 2017, 03:13 AM   #133
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by Myriad View Post
Just to be clear, yes, I am and have been using "narrative" in a very general sense, not implying necessary fictionality (nor non-fictionality). Synonyms would include "report," "record," "description," and "account." "Fiction" is a related word (so it would appear in a thesaurus) but is not a synonym because many narratives are non-fictional. "Story" is a synonym, but one must keep in mind that while some usages of "story" imply fictionality, the word in general does not; e.g. the "lead story" in a news report.
Thank you for the clarification. I was using “narrative” in the sense of a story. I prefer the word “description,” which is more neutral with respect to the facts referred to. I continue to speak of description and interpretation or explanation as different activities. Keep this in mind, please.

I don’t mind whether “decision” or “deliberative process” are “narratives” or descriptions. The problem is why you think they are “fictional,” or more fictional than “computational.”

Last edited by David Mo; 6th October 2017 at 03:15 AM.
Old 6th October 2017, 07:58 AM   #134
SOdhner
Graduate Poster
 
Join Date: Apr 2010
Location: Arizona
Posts: 1,156
Originally Posted by barehl View Post
I assume you aren't talking about chimpanzee consciousness or its equivalent.
I defined it pretty clearly. I specifically said human.

Originally Posted by barehl View Post
I can't think of any solid reason why a non-biological device couldn't match the function of a human brain.
I guess I'm not understanding how you're drawing the line between "human" and "human-like". You clearly don't like that I'm using "human" to mean "literally human" but I don't know where the cutoff would be for you. Dogs have brains that are way more like ours than any machine would ever be, would you say that dogs have "human consciousness"?

Originally Posted by barehl View Post
It seems like anything that behaves like a human would have to be considered at least partly human.
Are you saying "like a human" isn't "human-like" but should instead be called "human"? Then what would be "human-like"?

Originally Posted by barehl View Post
I don't know what this means.
The way our consciousness is generated, our decision-making, etc. is all founded in the way our biological brains work. If we ever create non-biological consciousness, it won't work in the same way. The end result may still feel familiar, but the underlying processes won't be at all human.

If you watch an old show where picture quality isn't an issue (because I don't want to strain this analogy too much) on both a modern flatscreen streaming from Netflix and an old CRT television hooked up to a VHS tape, you're seeing the same show, but the method of generating the image is totally different.

Likewise, we might someday create an artificial consciousness but no matter how human-like we get it if you look under the hood it won't be the same as a human consciousness.

Originally Posted by barehl View Post
If you are talking about some silly program that can pass a tiny fraction of the Turing Test then that isn't human-like at all.
"Some silly program" normally wouldn't be said to be conscious, so I'm not sure what you're talking about.

Originally Posted by barehl View Post
As far as I am aware (I could be wrong) it would have to have a great deal of overlap with human brain structure to do that.
I think you're totally wrong on that one. In fact, I think trying to copy human brain structure would make it immensely harder.

Originally Posted by barehl View Post
The set of all consciousnesses would be mostly made of unstable varieties, what we would think of as crazy. If we limit to the set of rational or sane consciousnesses then they can't be so very different.
You jump right to 'crazy' which I think is far less likely than 'alien'. Either way, the point is that while we're talking about human and human-like consciousnesses there's the potential for consciousnesses that don't fit into either category.

Originally Posted by barehl View Post
I don't understand. If I have a "human personality" box and it is plugged into an ethernet cord how could I talk to it? Doesn't it need some kind of camera vision and microphone hearing? Or do you mean by text like if we were exchanging messages?
Are you still conscious if nobody can talk to you?

Anyway, yes, obviously you would include some way to communicate otherwise what's the point? But you said that the consciousness would need to either have a physical robotic body or maybe a simulated body, and I'm saying neither of those things would be required.

It could be sufficiently human-like without a 'body' of any kind. That's all I'm saying.
Old 6th October 2017, 08:13 AM   #135
SOdhner
Graduate Poster
 
Join Date: Apr 2010
Location: Arizona
Posts: 1,156
Originally Posted by David Mo View Post
I would be very grateful if you tell us what “neuroscientific” sources suggest that the taking of an option is previous to the deliberative process.
I'm guessing that was a reference to the studies that found that our conscious idea of why we made a particular decision comes after the decision is made and may not match the ACTUAL decision-making process.

1. The brain does its committee thing: one part of the brain has fond memories of Sloth from The Goonies saying "rocky road", other parts chime in for various reasons, and the committee makes a choice
2. Conscious brain says "I'll take the rocky road, please!"
3. Conscious brain says "Uh... I wanted the rocky road because... I like marshmallows!" even though love of marshmallows was not at all the deciding factor.
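The three steps above can be caricatured as a toy model (all names, weights, and "reasons" here are invented, and this is not a claim about how brains actually work): one process makes the choice from weights the narrator never sees, and the narrator then supplies a plausible-sounding reason after the fact.

```python
def committee_decision(options, hidden_weights):
    """The 'committee': picks the option with the highest hidden weight.
    The conscious narrator never sees these weights."""
    return max(options, key=lambda o: hidden_weights.get(o, 0))

def conscious_report(choice, stock_reasons):
    """The 'narrator': produces a plausible reason after the fact,
    with no access to the weights that actually decided."""
    return f"I wanted the {choice} because {stock_reasons[choice]}"

# Invented example data: the real deciding factor is the weight from a
# fond memory, not the reported love of marshmallows.
hidden_weights = {"rocky road": 3, "vanilla": 1}
stock_reasons = {"rocky road": "I like marshmallows!",
                 "vanilla": "it's a classic."}

choice = committee_decision(["rocky road", "vanilla"], hidden_weights)
print(conscious_report(choice, stock_reasons))
```

The point of the toy is only structural: `conscious_report` runs after `committee_decision` and could produce the same sentence no matter what actually drove the choice.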

It's been a long time since I read about this in a primary source, so I may have gotten it a bit wrong. Anyway, it's a good example of how our consciousness is kind of a mess but it doesn't really support what Myriad was saying.

That being said, I do agree with a lot of what they're saying if I'm understanding correctly. Since the brain is all chemicals and signals, "making a decision" isn't significantly different than any other brain function. Either way, our brains process stimuli and bounce it around while using various rules and guidelines and things, and then we take some action. The things that physically happen for something where we would call it "making a decision" versus "spazzing out" or "acting on reflex" aren't really different from an objective point of view.
Old 6th October 2017, 09:44 AM   #136
MuDPhuD
Muse
 
Join Date: Feb 2011
Posts: 638
Originally Posted by David Mo View Post
I am really amazed by your comment. I don’t understand how a mental process can exist without a subject that acts. “I decide,” but I don’t exist? I don’t understand how you can speak of a “volitional action” (your words) without a “process of deliberation.” I don’t understand how you can deny that you consider the advantages and disadvantages of different cars before choosing one of them. I am afraid you have some verbal confusions.

To speak of “computational,” “negotiating with the environment,” etc. is itself a “narrative.” This is your own narrative, which allows you to reject any evidence by saying that it is “a narrative.” But my description is neutral with respect to any interpretation (“narrative,” in your words) of the facts. You can affirm or deny the “volitional action” (your words), or present this action as “computational” or not. These are problems subsequent to the description of the “volitional action.” Therefore, you cannot deny that a “volitional action” is preceded by an evaluation of the alternatives without falling into absurdity.

I would be very grateful if you would tell us what “neuroscientific” sources suggest that the taking of an option precedes the deliberative process. It is an interesting point, but it seems to me that they will tell a different “story” than the one you imagine.
Obviously I don't know what another person intends to say, but I think this may be the line of research being mentioned.
"As humans, we experience the ability to consciously choose our actions as well as the time at which we perform them. It has been postulated, however, that this subjective experience of freedom may be no more than an illusion [1], [2] and even our goals and motivations can operate outside of our consciousness [3]."

"Comparable to the original study [30] subjects' intentions could be read out approximately seven seconds before they became conscious. Given the haemodynamic delay, it is likely that this reflects neural processes that occurred even earlier by a few seconds."

"...detailed questionnaires exploring subjects' thoughts before and during the decision confirmed that decisions were made spontaneously and subjects were unaware of the evolution of their decision outcomes"

I think it is very important to distinguish the type of decision in these studies from the lengthier deliberation you describe when discussing the process of deciding which car to buy. They are not the same, and may very well not be subserved by the same neural circuits.

From the link I provide above:
"Subjects were instructed to passively view the stream of letters, relax, and refrain from thinking about the upcoming task. The index- and middle fingers of both hands rested on 4 buttons of two joysticks. Subjects were free to decide, at any time, to press the left or the right button with the corresponding index finger. As soon as they were aware of their decision, subjects were to note the letter presented on the screen. The time at which subjects are first aware of their decision will hereafter be referred to as the “decision time” in short. Subjects were instructed to then immediately perform the chosen action without any delay."

The decisions under study here are "free", but are made quickly, over a period of seconds to minutes. This is the fast-thinking, immediate, intuitive decision-making system. Mulling over the pluses and minuses of various car models may take hours, days, or longer. This is the slow-thinking deliberative system. As I said, these are different processes and no doubt utilize different circuits.

Nevertheless, with regard to the fast-acting intuitive thinking system, it does seem that the free "conscious" decision actually lags behind the brain process that has already made the choice. In this case, free will clearly is an illusion.
MuDPhuD is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 6th October 2017, 05:40 PM   #137
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by SOdhner View Post
I guess I'm not understanding how you're drawing the line between "human" and "human-like". You clearly don't like that I'm using "human" to mean "literally human" but I don't know where the cutoff would be for you. Dogs have brains that are way more like ours than any machine would ever be, would you say that dogs have "human consciousness"?
You excluded things less intelligent than humans so dogs would be excluded. Mammals share more behavior with humans than non-mammals, and primates share more behavior than non-primates (including dogs). However, I doubt that even Homo erectus or Neanderthals were that close.

I'm saying that it should be possible for a non-biological device to act completely human or perhaps indistinguishably human. That would never be possible for a dog or chimpanzee.

Quote:
The way our consciousness is generated, our decision-making, etc. is all founded in the way our biological brains work. If we ever create non-biological consciousness, it won't work in the same way. The end result may still feel familiar, but the underlying processes won't be at all human.

If you watch an old show where picture quality isn't an issue (because I don't want to strain this analogy too much) on both a modern flatscreen streaming from Netflix and an old CRT television hooked up to a VHS tape you're seeing the same show but the method of generating the image is totally different.
It wouldn't have to work the same way.

Quote:
Likewise, we might someday create an artificial consciousness but no matter how human-like we get it if you look under the hood it won't be the same as a human consciousness.
I'm not understanding this point. You seem to be making an arbitrary distinction whereas I'm not. I'm saying that the closer something behaves to human the more human-like it is whether it's another species, an alien, or something non-biological.

Quote:
"Some silly program" normally wouldn't be said to be conscious, so I'm not sure what you're talking about.
Like Watson. That isn't human at all, not even a tiny bit.

Quote:
I think you're totally wrong on that one. In fact, I think trying to copy human brain structure would make it immensely harder.
Why?

Quote:
You jump right to 'crazy' which I think is far less likely than 'alien'. Either way, the point is that while we're talking about human and human-like consciousnesses there's the potential for consciousnesses that don't fit into either category.
I'm not sure that's possible. Any working consciousness should overlap with human.

Quote:
Are you still conscious if nobody can talk to you?
If you are alone you could be conscious but unable to talk to anyone. So the box is in solitary confinement?

Quote:
Anyway, yes, obviously you would include some way to communicate otherwise what's the point? But you said that the consciousness would need to either have a physical robotic body or maybe a simulated body, and I'm saying neither of those things would be required.
And yet it would have a human personality?

Quote:
It could be sufficiently human-like without a 'body' of any kind. That's all I'm saying.
It sounds like you are making a Mary's Room argument.
barehl is offline
Old 6th October 2017, 05:43 PM   #138
barehl
Master Poster
 
barehl's Avatar
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by MuDPhuD View Post
Nevertheless, with regard to the fast-acting intuitive thinking system, it does seem that the free "conscious" decision actually lags behind the brain process that has already made the choice. In this case, free will clearly is an illusion.
That was Harris' contention. I think he is clearly wrong.
barehl is offline
Old 6th October 2017, 10:55 PM   #139
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by MuDPhuD View Post
Obviously I don't know what another person intends to say, but I think this may be the line of research being mentioned.
(...)
The decisions under study here are "free", but are made quickly, over a period of seconds to minutes. This is the fast-thinking, immediate, intuitive decision-making system. Mulling over the pluses and minuses of various car models may take hours, days, or longer. This is the slow-thinking deliberative system. As I said, these are different processes and no doubt utilize different circuits.

Nevertheless, with regard to the fast-acting intuitive thinking system, it does seem that the free "conscious" decision actually lags behind the brain process that has already made the choice. In this case, free will clearly is an illusion.
Thank you for the reference. For my part, I recommend another article by Soon et al. and the more classic one by Libet. I don't see that any of them speaks of intervals of more than ten seconds (Libet, far less) between the unconscious decision and the awareness of it. Have I missed something?

You are right that these studies don't analyze long-term decisions. They are limited to immediate decisions, which is a different thing. We cannot extrapolate their results.

None of them refutes the existence of real decisions, nor even that these decisions are free. Libet and Bode et al. explicitly speak of "free decisions". They only claim that these decisions are taken at the unconscious level.

That is what I was saying to Myriad in a previous comment: "It is an interesting point, but it seems to me that they tell a different 'story' than you imagine".
David Mo is offline
Old 6th October 2017, 11:00 PM   #140
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by barehl View Post
That was Harris' contention. I think he is clearly wrong.
Absolutely wrong. He draws erroneous inferences from the studies that he mentions. See my previous comment. Soon and Libet are (erroneously) quoted by Harris.
David Mo is offline
Old 9th October 2017, 07:56 AM   #141
SOdhner
Graduate Poster
 
Join Date: Apr 2010
Location: Arizona
Posts: 1,156
Originally Posted by barehl View Post
You excluded things less intelligent than humans so dogs would be excluded.
This is the first time it has come up, so no, I didn't (maybe you're thinking of where I was talking about how I define "human consciousness"?). But either way, you're avoiding the question: where do you draw the line between "human" and "human-like" consciousness?

Originally Posted by barehl View Post
However, I doubt that even Homo erectus or Neanderthals were that close.
Neanderthals were extremely close, as far as we can tell.

Originally Posted by barehl View Post
It wouldn't have to work the same way.
Right, agreed. All I was saying is that since the mechanics of the consciousness would be totally different from how it works in humans, I would personally use the term "human-like" rather than "human" because when I say "human consciousness" I'm referring to the whole thing, not just the superficial outward appearance.

Originally Posted by barehl View Post
I'm saying that the closer something behaves to human the more human-like it is whether it's another species, an alien, or something non-biological.
Sure, that's fine. We're stuck in a stupid semantic argument here that I don't care about. All I was saying was that I define "human consciousness" as specifically and literally human (so, not artificial), and that if something seemed reasonably like human consciousness but wasn't actually human (for example, something artificial) I would call it "human-like consciousness". I could probably come up with a bunch of other terms for other cases, but since they're not relevant here I'm not bothering.

Originally Posted by barehl View Post
Like Watson. That isn't human at all, not even a tiny bit.
But unless you're trying to say that Watson had consciousness, it's totally irrelevant.

Originally Posted by barehl View Post
Why?
The human brain is complicated, inefficient, and based on being part of an organic (specifically human) body. It works in ways that make it really bad at a lot of the things we would want an artificial consciousness to do. It has built-in "features" that would be irrelevant to an artificial consciousness. It relies on input from organic systems that wouldn't be there. It goes crazy. It has lots of biases. It learns and grows in unpredictable ways. While someone, someday, may try to make an artificial brain to prove that they can it would be an enormous amount of work for a poor payoff. When (if) we see artificial consciousness it will be wildly different from human consciousness by design.

Originally Posted by barehl View Post
I'm not sure that's possible. Any working consciousness should overlap with human.
I'd say you're lacking in imagination.

Originally Posted by barehl View Post
If you are alone you could be conscious but unable to talk to anyone. So the box is in solitary confinement?
I'm just saying we don't define consciousness based on whether or not you can talk to someone.

Originally Posted by barehl View Post
And yet it would have a human personality?
Human-like, sure.

Originally Posted by barehl View Post
It sounds like you are making a Mary's Room argument.
Nope.
SOdhner is offline
Old 9th October 2017, 11:25 AM   #142
Myriad
Hyperthetical
 
Myriad's Avatar
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,103
Originally Posted by barehl View Post
The first problem is that consciousness isn't possible in computational theory. You can't get consciousness regardless of what computation you do.

So you've repeatedly claimed, but I haven't seen a persuasive argument.

Quote:
I'm not exactly sure what this means. If you used a random number generator or you were getting data from an external source then the result would not be known. This is also not directly related to computation. For example, I don't know when my thermostat might kick on and cool or warm the house. I could estimate it more closely if the outside temperature were constant (and the sun didn't move). But it's pretty much unknown. My thermostat isn't related to consciousness though.

It means that there is more than one type of algorithmic predictability. Some processes and computations are predictable in the sense that a computational shortcut exists. You don't have to grind through every intermediate state change to predict that a durable, reliable clock will read 12:00 sometime around noon several days from now. Others are predictable only by duplicating the entire computation step by step. There's no closed-form equation into which you can plug starting conditions to get the state of a three-body system after some specified amount of time, or to determine whether a Turing-complete automaton will reach a given state.

Rightly or wrongly, in conversational language, "equation" and "algorithm" tend to imply the former type of predictability, so I prefer the more general term "computation" (which subsumes the evaluation of equations and the execution of algorithms) when the latter type of predictability (which Wolfram terms "computationally irreducible") is likely to be in play.
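The contrast between the two kinds of predictability can be sketched in a few lines of Python. This is my own toy illustration, not something from the thread: a clock stands in for the reducible case, and the Rule 110 cellular automaton (known to be Turing-complete) stands in for the irreducible one.

```python
def clock_reading(start_minute: int, n: int) -> int:
    """Computationally reducible: a closed-form shortcut gives the
    state after n ticks without simulating each intermediate tick."""
    return (start_minute + n) % (12 * 60)

def rule110_step(cells: tuple) -> tuple:
    """One generation of the Rule 110 elementary cellular automaton
    on a ring. Each cell's next value is a lookup on its neighborhood."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out.append((110 >> pattern) & 1)  # bit `pattern` of the number 110
    return tuple(out)

def rule110_after(cells: tuple, n: int) -> tuple:
    """Computationally irreducible (as far as anyone knows): to predict
    generation n you must grind through all n intermediate generations."""
    for _ in range(n):
        cells = rule110_step(cells)
    return cells
```

`clock_reading` jumps straight to the answer however large `n` is; `rule110_after` has no such shortcut, which is exactly the distinction Wolfram's term "computationally irreducible" is pointing at.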

Quote:
I don't know what this means. It almost sounds though like you are saying that you take the current machine state plus incoming data and then use an algorithm to create a result. That sounds like computation.

It's a computation. There is probably no "algorithm" involved that is in any way simpler than the computation itself, so "using an algorithm" may be a very misleading insertion.

Quote:
I'm not sure what a narrative understanding is. I would agree that human behavior takes prediction into consideration. Other primates seem to have some ability to do this.

Narrative understanding is the summarizing of information in memory and sensory input into a narrative of things and agents acting in the world.

"Some moving regions of a particular color tint and a particular shape to the left side of my visual field" is not a narrative understanding.

"There is a bear nearby, to my left" is not a narrative understanding, but it is a big step closer to it.

"There is a bear approaching from my left, but it hasn't seen me yet because it's looking for fish in the stream" is a narrative understanding.

Getting from the first to the third is the massive computational feat that both engenders and requires conscious awareness.
__________________
A zømbie once bit my sister...
Myriad is offline
Old 9th October 2017, 11:48 AM   #143
Myriad
Hyperthetical
 
Myriad's Avatar
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,103
Originally Posted by barehl View Post
This sounds similar to Dennett's Multiple Drafts Model. I can disprove his model.

A disproof of Dennett's multiple drafts model that passed peer review would certainly be a publishable paper.

I can do fifty push-ups in two minutes, if I train for a few months.

What do these "can do's" have to do with anything?

Quote:
I'm not sure what this means.

Making decisions is a story we tell about a process, not a reliable description of how the process works.

You might be familiar with a more common argument: that one can simulate a hurricane, but the simulation won't be wet. (The parallel is, one can perform a computation, but the computation won't be making a decision.)

The reason the simulation won't be wet, though, is not because the hurricane in the simulation is missing any essential property of water. Indeed, in the sense that a chemist or physicist means when she says something is wet (that is, that thing being influenced by the presence of water molecules that have certain effects, e.g. evaporating from surfaces and thereby cooling them), a complete and accurate simulation would indeed be just as wet. The distribution of simulated water molecules and their effects on other components of the simulation would indeed be part of the computation.

But the colloquial sense of the word "wet" doesn't refer to that; it refers to the experience of touching and interacting with water, feeling its viscosity and its evaporative cooling, and experiencing its secondary effects (such as how the dog smells).

"Wet" in that sense is, yes you guessed it, like all so-called qualia, an element of narrative. It's not reality. Nothing would be wet (in that sense) if there were no conscious brains around to experience it.

Everything real about wetness is in fact right there in the simulation. What's missing in the simulation is the narrative of wetness, and that's only missing because (a) we don't apply our experiential narratives to the indirect experience of evaluating the simulation, and (b) there is no consciousness inside the simulation to experience it directly.

Because we do tend to personify things, we do sometimes tell narratives of simple non-conscious devices like thermostats "deciding" to turn on, or devices perceived as overly complex like printers "deciding" to print (perhaps only after "coaxing"). But we make a distinction between that usage and "real" human decisions that involve subjectively difficult cognition. My point is that those "real" human decisions are, like the wetness of water or the simpler decisions of simpler machines, just narrative descriptions, not actual mechanics.
Myriad is offline
Old 9th October 2017, 11:10 PM   #144
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by Myriad View Post
Because we do tend to personify things, we do sometimes tell narratives of simple non-conscious devices like thermostats "deciding" to turn on, or devices perceived as overly complex like printers "deciding" to print (perhaps only after "coaxing"). But we make a distinction between that usage and "real" human decisions that involve subjectively difficult cognition. My point is that those "real" human decisions are, like the wetness of water or the simpler decisions of simpler machines, just narrative descriptions, not actual mechanics.
I do not know anyone who claims that a thermostat decides something. But everybody, except you perhaps, says that Peter decided to go to the theater yesterday.

I am not an expert in astronomy and the three body problem, but I can assure you that the scientists of the Europa Clipper mission know perfectly where their spacecraft will encounter Jupiter's moon. This is a prediction. And I don't know if Daisy will finally decide to come to my rendezvous. This is a decision. I am sure that if Daisy were an n-body system I would not have so many problems meeting her. However, Daisy is Daisy and Europa is Europa. Daisy comes when she wants; Europa is a constant in a deterministic system. The problem is not the difference between them, but how to describe this difference.

Originally Posted by Myriad View Post
...just narrative descriptions, not actual mechanics.
What now, then? Is “narrative” fictional or not?
David Mo is offline
Old 10th October 2017, 02:38 AM   #145
wea
Critical Thinker
 
wea's Avatar
 
Join Date: Mar 2015
Location: EU
Posts: 348
Originally Posted by David Mo View Post
I am not an expert in astronomy and the three body problem, but I can assure you that the scientists of the Europa Clipper mission know perfectly where their spacecraft will encounter Jupiter’s moon. This is a prediction.
No they don't. They don't write down a system of differential equations describing the motion of the three bodies and "solve" it. They can start with a solution for a reduced problem (such as the two-body problem) and correct by successive approximations. Or they can simulate a "grid" of random trajectories, choose the most promising, and optimize by variational methods. Or ... In any case, they are compelled to simulate the trajectory numerically (integrate it), and it will diverge (becoming practically indistinguishable from chaotic motion) outside the computed interval. And that's only three bodies.
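To make "integrate the trajectory" concrete, here is a minimal sketch of my own (the units, masses, and step size are arbitrary toy values, and a real mission would use a far more accurate integrator than explicit Euler). The point is that the state is advanced step by step; there is no formula to jump ahead.

```python
def step(bodies, dt, G=1.0):
    """Advance an N-body gravitational system by one explicit-Euler step.
    bodies: list of [x, y, vx, vy, mass] in toy units; modified in place."""
    # First compute every body's acceleration from the *current* positions.
    acc = []
    for i, (xi, yi, _, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * mj * dx / r3
            ay += G * mj * dy / r3
        acc.append((ax, ay))
    # Then update positions and velocities.
    for b, (ax, ay) in zip(bodies, acc):
        b[0] += b[2] * dt
        b[1] += b[3] * dt
        b[2] += ax * dt
        b[3] += ay * dt
    return bodies
```

For example, a small body started on a roughly circular orbit around a unit mass stays near radius 1 for a while, but the numerical error accumulates with every step, which is why long-range predictions require re-integration and course corrections rather than a one-shot answer.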

Originally Posted by David Mo View Post
And I don’t know if Daisy will finally decide to come to my rendezvous. This is a decision. I am sure that if Daisy was a n-body system I would not have so many problems to meeting her. However Daisy is Daisy and Europa is Europa. Daisy comes when she wants and Europa is a constant into a determinate system.
Daisy, in this comparison, is the result of the interaction of several, possibly thousands of subsystems, each composed, among other things, of millions of neurons. If Daisy were even a 100-body system, I assure you you would have many problems meeting her. She could well be far more composite than that.
wea is offline
Old 10th October 2017, 03:09 AM   #146
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by wea View Post
No they don't. They don't write down a system of differential equations describing the motion of the three bodies and "solve" it. They can start with a solution for a reduced problem (such as the two-body problem) and correct by successive approximations. Or they can simulate a "grid" of random trajectories, choose the most promising, and optimize by variational methods. Or ... In any case, they are compelled to simulate the trajectory numerically (integrate it), and it will diverge (becoming practically indistinguishable from chaotic motion) outside the computed interval. And that's only three bodies.
The method doesn't matter now. The fact is that many space probes have been sent to various planets and moons, and they arrived on target. Even to small space objects such as asteroids (see Rosetta). This is prediction, is it not?


Originally Posted by wea View Post
Daisy, in this comparison, is the result of the interaction of several, possibly thousands of subsystems, each composed among others things by millions of neurons. If Daisy were a 100-body system I assure you'll have many problems to meet her. She could well be even more composite.
Do you have any evidence that Daisy's decision not to come to my rendezvous "is the result of the interaction of several, possibly thousands of subsystems, each composed, among other things, of millions of neurons", or is this a "fictional narrative", that is to say, metaphysical speculation? Can you predict Daisy's absence like scientists can predict the presence of Šteins (see Rosetta)? Why not?

Last edited by David Mo; 10th October 2017 at 03:10 AM.
David Mo is offline
Old 10th October 2017, 03:56 AM   #147
wea
Critical Thinker
 
wea's Avatar
 
Join Date: Mar 2015
Location: EU
Posts: 348
Originally Posted by David Mo View Post
Can you predict Daisy's absence like scientists can predict the presence of Šteins (see Rosetta)? Why not?
Complexity; several orders of magnitude. I can trace a trajectory for the center of mass of a body whose initial conditions are well known, moving in a simple gravitational field. With a comparable effort, I can modify animal behaviour, for instance make a mouse turn by optogenetic stimulation of just a few neurons. I can't predict Daisy's absence, just as I can't predict (except statistically) if/when two specific molecules of air in this room will ever meet. In the second case the principle of parsimony suggests it's simply (as yet) too complex to track every single particle; why should I resort to some other kind of entity for Daisy?
wea is offline
Old 10th October 2017, 06:17 AM   #148
jrhowell
Thinker
 
Join Date: Jun 2009
Posts: 243
Originally Posted by David Mo View Post
I do not know anyone who claims that a thermostat decides something.
I will do so.

Like a thermostat, our decisions are a product of our construction and history. The root causes are outside of ourselves and materialistically determined. The decision making process is far more complex in humans, but there is an essential sameness at the core. The main difference is that the decision making process of a thermostat is easily understandable while that of a human is so complex as to be inscrutable.

Added: Even a basic mechanical thermostat does not just turn on below a certain temperature and turn off above it. Most contain a "heat anticipator" that turns it off before the target temperature is reached to compensate for the expected additional temperature rise afterward.
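That anticipator behaviour can be captured in a few lines. This is a toy sketch of mine, not a description of any real thermostat; the offsets are invented for illustration.

```python
class Thermostat:
    """Toy heating thermostat with a heat anticipator and hysteresis."""

    def __init__(self, target, anticipator=0.5, hysteresis=0.5):
        self.target = target
        self.anticipator = anticipator  # switch off early; residual heat finishes the job
        self.hysteresis = hysteresis    # dead band to avoid rapid on/off cycling
        self.heating = False

    def update(self, temp):
        """Return True while the furnace should run, given the current temp."""
        if self.heating:
            # Heat anticipator: stop *before* the target is reached,
            # expecting the temperature to coast up the rest of the way.
            if temp >= self.target - self.anticipator:
                self.heating = False
        else:
            # Only restart once the temperature has sagged below the dead band.
            if temp <= self.target - self.anticipator - self.hysteresis:
                self.heating = True
        return self.heating
```

Even here, nothing but mechanism is involved; whether we call the cut-out point a "decision" is exactly the question under discussion.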

Last edited by jrhowell; 10th October 2017 at 06:23 AM.
jrhowell is offline
Old 10th October 2017, 07:15 AM   #149
Myriad
Hyperthetical
 
Myriad's Avatar
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,103
Originally Posted by David Mo View Post
I do not know anyone who claims that a thermostat decides something. But everybody, except you perhaps, says that Peter decided to go to the theater yesterday.

Everybody says that, yes, as do I. Just like everybody says the sun rises, and even professional astronomers say the constellations rotate around the celestial poles.

Again, what we say, how we describe things, does not dictate reality.

Quote:
I am not an expert in astronomy and the three body problem, but I can assure you that the scientists of the Europa Clipper mission know perfectly where their spacecraft will encounter Jupiter's moon. This is a prediction. And I don't know if Daisy will finally decide to come to my rendezvous. This is a decision. I am sure that if Daisy were an n-body system I would not have so many problems meeting her. However, Daisy is Daisy and Europa is Europa. Daisy comes when she wants; Europa is a constant in a deterministic system. The problem is not the difference between them, but how to describe this difference.

The distinction I'm making isn't between absolute predictability and absolute unpredictability; it's between computationally reducible predictions (which often take the form of solutions to closed-form equations, and which were the basis of most scientific modeling for several centuries up until just recently), and computationally irreducible predictions.

Quote:
What now, then? Is “narrative” fictional or not?

We don't get to know that. Our narratives are models. Even the most rigorous science doesn't ask whether models are true or not; only whether or not they make successful predictions, and whether or not there's a better model.

There is plenty of evidence that our subjective narratives of how ideation comes about in our minds are an unreliable model of the actual process. Examples range from commonplace experience, such as people who don't believe they are influenced by advertising when research demonstrates they are, to extreme cases such as split-brain experiments demonstrating that the speaking brain will generate arbitrary explanations for behavior that's actually caused more directly by a stimulus the consciousness doesn't perceive.

Last edited by Myriad; 10th October 2017 at 07:17 AM.
Myriad is offline
Old 10th October 2017, 02:54 PM   #150
SOdhner
Graduate Poster
 
Join Date: Apr 2010
Location: Arizona
Posts: 1,156
Originally Posted by David Mo View Post
I do not know anyone who claims that a thermostat decides something. But everybody, except you perhaps, says that Peter decided to go to the theater yesterday.
Typical usage: "I set the thermostat to 70 ten minutes ago, but for some reason it only decided to actually turn on just now."
SOdhner is offline
Old 10th October 2017, 10:17 PM   #151
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by SOdhner View Post
Typical usage: "I set the thermostat to 70 ten minutes ago, but for some reason it only decided to actually turn on just now."
This is typical metaphorical language: "the attribution of a personal nature or human characteristics to something non-human, or the representation of an abstract quality in human form". Example: "The stars danced playfully in the moonlit sky". Or: "The wind sang through the meadow."
David Mo is offline
Old 10th October 2017, 11:15 PM   #152
David Mo
Graduate Poster
 
David Mo's Avatar
 
Join Date: Aug 2012
Posts: 1,879
Originally Posted by wea View Post
Complexity; several orders of magnitude. I can trace a trajectory for the center of mass of a body whose initial conditions are well known, moving in a simple gravitational field. With a comparable effort, I can modify animal behaviour, for instance make a mouse turn by optogenetic stimulation of just a few neurons. I can't predict Daisy's absence, just as I can't predict (except statistically) if/when two specific molecules of air in this room will ever meet. In the second case the principle of parsimony suggests it's simply (as yet) too complex to track every single particle; why should I resort to some other kind of entity for Daisy?
Daisy is not a molecule. A simple similarity is not evidence. I asked for evidence about human behaviour, not electrons.

Originally Posted by jrhowell View Post
I will do so.
Like a thermostat, our decisions are a product of our construction and history. The root causes are outside of ourselves and materialistically determined. The decision making process is far more complex in humans, but there is an essential sameness at the core. The main difference is that the decision making process of a thermostat is easily understandable while that of a human is so complex as to be inscrutable.
Added: Even a basic mechanical thermostat does not just turn on below a certain temperature and turn off above it. Most contain a "heat anticipator" that turns it off before the target temperature is reached to compensate for the expected additional temperature rise afterward.
If you say that something is not understandable, you cannot also say that you understand it to be like some other thing that you understand well. Your example of a "heat anticipator" is not an example of a decision, but a mechanism included within another mechanism.
Your whole comment is a statement of your beliefs, not evidence for them.

I would like to know where it is said that a thermostat decides to stop. Any quotation? Not ironical or poetic personifications, please.

Originally Posted by Myriad View Post
Everybody says that, yes, as do I. Just like everybody says the sun rises, and even professional astronomers say the constellations rotate around the celestial poles.
Everybody? Who is everybody? Can you be more precise? Are you including poetical personifications?

Originally Posted by Myriad View Post
Again, what we say, how we describe things, does not dictate reality.
Of course.

Originally Posted by Myriad View Post
The distinction I'm making isn't between absolute predictability and absolute unpredictability; it's between computationally reducible predictions (which often take the form of solutions to closed-form equations, and which were the basis of most scientific modeling for several centuries up until just recently), and computationally irreducible predictions.

We don't get to know that. Our narratives are models. Even the most rigorous science doesn't ask whether models are true or not; only whether or not they make successful predictions, and whether or not there's a better model.
It depends on what concept of truth you use. Prediction is one criterion of truth among others. If you are not a total relativist, you have to recognize that some models are better adjusted to reality than others.

Originally Posted by Myriad View Post
There is plenty of evidence that our subjective narratives of how ideation comes about in our minds is an unreliable model of the actual process. Examples range from commonplace experience, such as people who don't believe they are influenced by advertising when research demonstrates they are influenced by advertising, to extreme cases such as split-brain experiments demonstrating that the speaking brain will generate arbitrary explanations for behavior that's actually caused more directly by a stimulus the consciousness doesn't perceive.
The example that you provide is irrelevant because the subject is anomalous: a man with a split brain.

Notwithstanding:
The diverse kinds of unconscious stimulation of human behavior have been well studied for a long time. Subliminal stimuli in advertisements and political propaganda (which you mention) are a popular example, but only one among others. But that does not imply that every conscious description of our behavior is wrong. First, because a description is not an explanation: the mistake happens at the level of the explanation of causes, not the description of behavioral facts. Second, because certain behaviors can be explained by unconscious stimulation, but many others cannot; the debate on free decisions concerns the latter. Third, a decision made for unconscious reasons is still a decision. What differentiates a determined decision from a free decision is the difference between causes and reasons.

Finally, I would be grateful if you would answer my questions rather than launching new theories. Thank you.

Last edited by David Mo; 10th October 2017 at 11:17 PM.
Old 11th October 2017, 02:59 AM   #153
wea
Critical Thinker
 
 
Join Date: Mar 2015
Location: EU
Posts: 348
Originally Posted by JoeBentley View Post
Until someone provides a workable definition of "Free Will" that isn't just saying "There's a magic air gap in our mental processing between cause and effect" this is all pointless.

I spent the last 5 years of my life in a thread on this board where someone tried to talk about a soul without mentioning the word, this is just more of the same.
I guess I know how you feel
Old 11th October 2017, 07:14 AM   #154
SOdhner
Graduate Poster
 
Join Date: Apr 2010
Location: Arizona
Posts: 1,156
Originally Posted by David Mo View Post
This is a typical metaphorical language.
Right. So now the question is, what's the distinction that makes it non-metaphorical for humans to "decide" something? Where do you draw the line?

Do dogs make decisions? What about fish? Do bugs make decisions? Bacteria?

I don't actually have an opinion either way, honestly, but I think it's a valid question. Our brains are messy and complicated so it's hard to say exactly what's going on, but I could see the argument that when we "decide" to do something it's just a *much* more complex version of, say, a thermostat "deciding" to kick on.

I guess it mainly depends on if you can get everyone to agree on a really good definition for "decide" as it applies to the actual process that takes place.
Old 11th October 2017, 01:41 PM   #155
JoeBentley
Self Employed
Remittance Man
 
 
Join Date: Nov 2009
Location: Jacksonville, FL
Posts: 7,775
We need a way to codify "You know what I'm talking about, stop pretending you don't for effect" into philosophically acceptable language.

I've brought this up multiple times in various "philosophical" discussions, but our language developed (in a very haphazard, disorganized way) so people can convey actions, thoughts, ideas, and emotions to each other on a practical, non-esoteric, day-to-day level.

And on that level yes people make decisions. If Bill asks Ted what he wants for lunch and Ted goes "A grilled cheese sandwich" Ted has made a decision as to what to eat for lunch on that level.

So much of philosophy is just manufactured, self-important hand-wringing over the fact that our language pretty much only works on that everyday, basic human-interaction level.

Look at it this way. You're driving somewhere you've never been before and a friend is in the front passenger seat giving you directions.

You come to an intersection and your friend says "Turn left here." Now, do you freeze in place, confused and unable to proceed, because you can't tell whether the simple phrase "Turn left here" means that you are to, at the intersection, when the appropriate light/road signage allows, rotate the steering wheel of the car counterclockwise, thus causing the front wheels of the vehicle to alter their orientation and the vehicle to turn left onto the new stretch of road, or whether your friend wants you to just angle your body left in your seat?

No. You don't. Because you understand the context: you're driving a vehicle, taking vehicular directions from someone. It's safe to assume the directions are for the vehicle.

"Decision" as a mental process really only applies to everyday human interactions. That doesn't make it an invalid or "wrong" concept, any more than the fact that my mortgage payment isn't going to be factored into the Unified Theory of Everything makes the mortgage invalid.
__________________
"Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset, Se7en

"Hating a bad thing does not make you good." - David Wong
Old 11th October 2017, 06:15 PM   #156
barehl
Master Poster
 
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by SOdhner View Post
where do you draw the line between "human" and "human-like" consciousness?
I would say that human means acting as a human would act, whereas human-like would mean falling short of that. I don't see any reason to create a separate category for non-biological.

Quote:
Neanderthals were extremely close, as far as we can tell.
Not that I've been able to tell, not unless you have a very liberal definition of close. There is no evidence that Neanderthals ever had writing or art. Their most complex weapon seems to have been a hand-held spear. The body form suggests a species adapted to dangerous physical activities rather than one smart enough to avoid them. And there is evidence that the teeth were frequently used as tools. Male orangutans do that; humans don't.

Quote:
Right, agreed. All I was saying is that since the mechanics of the consciousness would be totally different from how it works in humans, I would personally use the term "human-like" rather than "human" because when I say "human consciousness" I'm referring to the whole thing, not just the superficial outward appearance.
Okay. They used to have the category of great apes, which included all larger apes except humans. The problem was that they couldn't explain why humans were excluded. I've never talked about anything superficial. A funny car superficially resembles a production car. A non-biological consciousness that could pass for human would have to be much closer than superficial.

Quote:
The human brain is complicated, inefficient
Inefficient in what way?

Quote:
It works in ways that make it really bad at a lot of the things we would want an artificial consciousness to do.
Like what?

Quote:
While someone, someday, may try to make an artificial brain to prove that they can it would be an enormous amount of work for a poor payoff.
I've already said this several times. The best estimate that I've come up with is that, after publication of a general theory, it would take about six years and half a billion dollars. And that would give you a consciousness about the same as a human's. Most seem to make additional assumptions. For example, some assume that simply speeding this device up would then give you superintelligence. No. Others assume that the behavior could be locked down, as Asimov did with his Three Laws. No.

Quote:
When (if) we see artificial consciousness it will be wildly different from human consciousness by design.
I'm not sure what you mean by wildly different.

Quote:
I'd say you're lacking in imagination.
I don't deal in imagination or fantasy or fairy tales or wishful thinking. I deal with what is possible.
Old 11th October 2017, 06:35 PM   #157
barehl
Master Poster
 
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
So you've repeatedly claimed, but I haven't seen a persuasive argument.
I'm sorry; I don't do arguments, at least not on this topic. I've given some consideration to a disproof. I think such a disproof could be made based on information coherence. Then it occurred to me that the simplest falsification of such a conjecture would be a learning system that was parsimonious, coherent, and undirected. That would prove my conjecture dead wrong.

So, what am I talking about? When you try to build learning systems you run into problems. One problem is information loss, where new learning distorts or erases what was already learned. This doesn't happen with the brain. Another problem is retention of extraneous information: if you allow this, the information set in your system will blow up and become unusable. The brain doesn't do this. Another problem is being able to pick out novel relationships, sets, and patterns without being explicitly directed. Yes, directed learning does take place in humans; this is what formal education is. But humans learn a great deal without this type of direction. So, this is what you need to get something in the same class as a human, or even equal to most mammals. I read the DeepMind paper, which as far as I am aware is the absolute cutting edge. It's not there. So, I guess I'll have to continue working on the disproof.
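The "information loss" problem described above is known in the machine-learning literature as catastrophic forgetting, and it shows up even in the smallest possible model. The sketch below is a toy illustration under made-up assumptions (a single weight, hand-picked targets and learning rate), not a model of the brain or of any real system:

```python
# A single linear model, prediction = w * x, trained by gradient descent on
# squared error. Train it first on task A (target y = 2x), then on task B
# (target y = -2x) with no replay of task A data: task B training simply
# overwrites the weight, and task A is forgotten.

def train(w, slope, lr=0.01, epochs=200):
    data = [1.0, 2.0, 3.0]
    for _ in range(epochs):
        for x in data:
            error = w * x - slope * x   # prediction minus target
            w -= lr * error * x         # gradient step on squared error
    return w

w = train(0.0, slope=2.0)               # learn task A: w converges near 2
w_after_a = w
w = train(w, slope=-2.0)                # learn task B: w is dragged to near -2
task_a_error = abs(w * 1.0 - 2.0 * 1.0) # task A prediction is now badly wrong
```

After the second training phase the model fits task B well, but its error on task A is as large as if it had never been trained on it; a system that avoided this loss while staying parsimonious and undirected is exactly the kind of falsifier described above.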

Quote:
so I prefer the more general term "computation" (which subsumes the evaluation of equations and the execution of algorithms) when the latter type of predictability (which Wolfram terms "computationally irreducible") is likely to be in play.
That's okay with me.

Quote:
It's a computation.
The brain isn't based on computation.

Quote:
Narrative understanding is the summarizing of information in memory and sensory input into a narrative of things and agents acting in the world.
You're talking about abstract understanding. I don't use the term narrative for that because then it sounds language based when it isn't.

Last edited by barehl; 11th October 2017 at 07:30 PM.
Old 11th October 2017, 06:41 PM   #158
barehl
Master Poster
 
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by Myriad View Post
A disproof of Dennett's multiple drafts model that passed peer review would certainly be a publishable paper.
I guess it will be a small section in one chapter.

Quote:
But the colloquial sense of the word "wet" doesn't refer to that; it refers to the experience of touching and interacting with water, feeling its viscosity and its evaporative cooling, and experiencing its secondary effects (such as how the dog smells).

"Wet" in that sense is, yes you guessed it, like all so-called qualia, an element of narrative. It's not reality. Nothing would be wet (in that sense) if there were no conscious brains around to experience it.
Yes, Mary's Room. This isn't new.

Quote:
But we make a distinction between that usage and "real" human decisions that involve subjectively difficult cognition. My point is that those "real" human decisions are, like the wetness of water or the simpler decisions of simpler machines, just narrative descriptions, not actual mechanics.
I already know how to do decisions. That's not an issue.
Old 11th October 2017, 06:49 PM   #159
barehl
Master Poster
 
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by wea View Post
I guess I know how you feel
I view free will as being related to volition which is part of the brain. I don't see it as related to a soul or spirit or everlasting smoke.
Old 11th October 2017, 06:53 PM   #160
barehl
Master Poster
 
 
Join Date: Jul 2013
Posts: 2,500
Originally Posted by JoeBentley View Post
So much of philosophy is just manufactured self important hand wringing over the fact that our language pretty much only works on that every day, basic human interaction level.
That's why I don't do philosophy. I'm not really interested in philosophy.
All times are GMT -7. The time now is 08:39 PM.
Powered by vBulletin. Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
© 2014, TribeTech AB. All Rights Reserved.
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.