Tags: artificial intelligence, consciousness
View Poll Results: Is consciousness physical or metaphysical?
Consciousness is a kind of data processing and the brain is a machine that can be replicated in other substrates, such as general-purpose computers: 81 (86.17%)
Consciousness requires a second substance outside the physical material world, currently undetectable by scientific instruments: 3 (3.19%)
On Planet X, unconscious biological beings have perfected conscious machines: 10 (10.64%)
Voters: 94. You may not vote on this poll
8th May 2012, 01:29 PM | #321 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
I totally dig your concerns and observations. For me and my various rages, it's never been a question of science vs. religion; it's always been about science, and how it can become more sane and ethical.
Of course, there are movements within science that lean this way. Some of the most respected scientists would have very little issue with my spewage on this matter. Marrying ethics to science is a slippery slope, but the opposite (marrying science to business) turns out a lot of useless crap. Or worse, harmful crap. But fear not, my blows against the empire are pathetically ineffective. My personal history, of course, flavors my attitude. I realize such anecdotes don't belong here, but here's one anyway: my son-in-law, who is a brilliant and highly sought-after chemical engineer, was actually working for Bechtel when they decided to privatize water in Bolivia. He met my daughter when she was working for those exploited peasants. She's not a scientist. He was brilliant enough to catch the wave of her innate wisdom. |
8th May 2012, 02:06 PM | #322 |
Philosopher
Join Date: Sep 2006
Posts: 6,985
|
|
8th May 2012, 03:52 PM | #323 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
The presentations on pages 26 and 50 of this symposium are some fairly simple ones from over half a decade ago: http://sacral.c.u-tokyo.ac.jp/pdf/Ik...sness_2005.pdf
They show conscious behaviors in the sense of rehearsing possible actions in the mind, using the same neural-network pathways as actual perception, AKA imagination. |
8th May 2012, 06:15 PM | #324 |
Banned
Join Date: Dec 2007
Posts: 5,211
|
|
8th May 2012, 06:36 PM | #325 |
Banned
Join Date: Dec 2007
Posts: 5,211
|
"4 Experimental Results The implemented system currently runs on a 2.5 GHz Pentium 4 machine" [Bolding added] Really? That first paper (p26) was well worded and written, theoretically fine, but it totally lacked any sort of in-depth analysis for me to comment on. They didn't even include any of the source code they used for the 'robot' in question for me to analyse. Again, all I see with these artificial-neural-network-based algorithms is people trying to model real biological neural networks with abstract models of information processing, which, although they may prove useful in a scientific sense, still lack any sort of consciousness, any more than a cleverly coded supercomputer does. |
8th May 2012, 06:45 PM | #326 |
Banned
Join Date: Dec 2007
Posts: 5,211
|
Why do you say demons? I never mentioned anything even remotely relevant to that reference. If you want to discuss the problems with string theory that's fine, but try to keep it to a relevant thread and not hijack this one. Or just buy this http://www.amazon.com/The-Trouble-Wi.../dp/0618551050 |
8th May 2012, 09:54 PM | #327 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
It's kind of cool how threads about consciousness are so easily hijacked and derailed.
It's pretty hard to be off-topic. Which is why I'd like to see kittens. |
9th May 2012, 08:04 AM | #329 |
Philosopher
Join Date: Jun 2010
Posts: 9,800
|
|
9th May 2012, 10:52 AM | #330 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
Funny about language...
What do you mean by "...in mind."? It's hard to discuss a background sort of consciousness without sounding all wooed-out or religious-fundamentalist. I'm neither; more of a fun, mental sort... I'd try to give it a go, though it's a very strenuous hypothesis, and past attempts have garnered some mockery and nastiness. But I'm still here. My "single quark" hypothesis is exhausting to my brain. Maybe I'll give it a shot, after chores. |
9th May 2012, 10:53 AM | #331 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
Why do you think this? Let me phrase the question in another way:
Why do you think the causal sequences of node activation in an artificial neural network are different from the causal sequences of node activation in a biological neural network? The essential property of a neural network is that one neuron's output leads to a change in the behavior of neurons downstream. If a conscious behavior arises from the way a network functions, what difference does it make how or where that network is implemented?

As for the first presentation, you don't need source code. It wouldn't make sense even if you saw it, because there is no specific programming that is relevant to the robot. That isn't how neural networks work. They trained the robot so that its goal is focusing on blue, and they trained it so that when it turned one way it saw blue, and when it turned the other way it did not. They did *not* tell the robot to turn -- ever. What the robot did was imagine the act of turning in either direction; imagining a left turn led to the imagination of a blue percept, which caused the robot to then *want* to turn left -- it effectively decided to turn left because it imagined that if it turned left it would see blue, and seeing blue is its goal. And this was all done with trained neural networks -- no hard coding of any behavior.

This is true imagination, by any possible definition of the term. Genuine, real, authentic imagination. And imagination is one of the behaviors we normally attribute to consciousness. Yeah, the robot can't write poetry, or even play Jeopardy as well as Watson, but Watson doesn't *imagine* things like we do. This robot did. |
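The imagine-then-act loop described here can be hedged into a toy sketch. The dictionary below is a hypothetical stand-in for the paper's trained forward model (the real system learned the action-to-percept mapping; nothing here is hard-coded from their code):

```python
# Hypothetical sketch of "imagine the outcome, then act".
# learned_forward_model stands in for a trained network's prediction:
# given an imagined action, what percept would follow?
learned_forward_model = {"turn_left": "blue", "turn_right": "gray"}
goal_percept = "blue"

def choose_action(actions):
    """Rehearse each candidate action internally and pick the one
    whose imagined percept matches the goal percept."""
    for action in actions:
        imagined_percept = learned_forward_model[action]  # imagination, not movement
        if imagined_percept == goal_percept:
            return action
    return None  # nothing imagined leads to the goal

print(choose_action(["turn_right", "turn_left"]))  # -> turn_left
```

The point of the sketch is only the control flow: the "decision" to turn left falls out of comparing imagined outcomes against a goal, with no instruction to turn anywhere.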
9th May 2012, 10:54 AM | #332 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
9th May 2012, 11:02 PM | #333 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
I was just thinking about how the first electronic computer, the ENIAC, was developed to feel the future, and it did so successfully.
It was developed to calculate artillery trajectories. In other words, given a specific weight and size of bullet, amount of gunpowder, the gun angle, wind speed and angle, air temperature and humidity, where will the bullet land? Feeling the future is a fundamental application of mathematics and computers. Great things are happening as computers do it more and more like our brains do. |
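As a rough illustration of the core calculation (idealized physics with no wind, drag, or humidity, unlike the real firing tables ENIAC computed):

```python
import math

def landing_distance(speed_mps, angle_deg, g=9.81):
    """Idealized projectile range: how far a shell fired at the given
    speed and elevation angle travels before returning to launch
    height. Closed form: v^2 * sin(2*theta) / g."""
    angle = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2 * angle) / g

# Maximum range occurs at 45 degrees in this idealized model.
print(round(landing_distance(100.0, 45.0), 1))  # -> 1019.4 (metres)
```

The real machines solved this numerically, step by step along the trajectory, precisely because the messy correction terms have no tidy closed form.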
9th May 2012, 11:26 PM | #334 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Success at fulfillment, perhaps?
"Follow your heart (gut, feelings, do what you love, etc.)" is too often bad advice for success in fulfillment or any other endeavor, because the "heart" (the emotional part of the brain that purportedly feels the future) evolved through chaotic evolutionary processes that were only guaranteed, in the past, to have conferred enough success to proliferate the genes responsible. |
11th May 2012, 09:54 AM | #335 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
This is pretty interesting:
http://www.iis.ee.ic.ac.uk/yiannis/DemirisJohnson03.pdf Although this is almost 10 years old, it describes a very important mechanism in biological brains: re-using the same circuitry for both action and planning, and in some cases observation and learning (both of those are supersets of planning, though). The basic idea is that the circuitry of the motor cortex is used not only for controlling muscles and decoding muscle position, but also for simulating the control of muscles and the effects of that control, i.e. imagining movement. Furthermore, the imagining of movement is used during learning, i.e. "if I do this, and my arm then moves up like so, I will be in the right position." In this case the researchers use a sequence something like this:
1) The robot observes the goal configuration of the arm, on another robot.
2) The code modules that plan the robot's arm movement are re-routed to internal locations (they no longer control the arm; their output goes back into the robot's brain).
3) Those modules then control simulations of the arm, i.e. if one would "raise" the arm, the arm isn't actually raised, yet portions of the robot's brain are activated as if it were (for example, imagine raising your arm -- you can also imagine what your arm feels like in the raised position).
4) The results of those simulations are evaluated to see if any of them bring the arm closer to the goal configuration.
5) The simulated movements that are rated best are reinforced, and more iterations of imagination are performed.
6) Eventually a sequence of movements that the robot imagined would put it in the goal configuration is found, and the arm control modules are re-routed back to the real arm.
7) The action is performed.
This seems very convoluted, but it is important to realize that this is the exact mechanism by which animals not only plan movements but also learn movements from observing others.
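That rehearse-then-act loop can be sketched in miniature. Everything here is a made-up toy (a one-dimensional "arm" whose state is a single angle), not the paper's implementation:

```python
GOAL_ANGLE = 90.0  # the observed goal configuration (hypothetical)

def simulate(angle, command):
    """Forward model: predict the arm angle after a motor command,
    without actually moving anything."""
    return angle + command

def plan(start_angle, steps=5):
    """Re-route motor commands into the simulator: imagine each
    candidate command, keep whichever imagined result lands closest
    to the goal, and only record the winning sequence."""
    angle, plan_cmds = start_angle, []
    for _ in range(steps):
        candidates = [-10.0, 0.0, 10.0]  # imagined motor commands
        best = min(candidates, key=lambda c: abs(simulate(angle, c) - GOAL_ANGLE))
        angle = simulate(angle, best)    # imagined state update, not execution
        plan_cmds.append(best)
    return plan_cmds  # this sequence would then be sent to the real arm

print(plan(50.0))  # -> [10.0, 10.0, 10.0, 10.0, 0.0]
```

The interesting part is that the same `simulate` routine could serve as the real motor controller; only the routing of its output changes between imagining and acting.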
In this case a neural network was not used, but the high level information flow is nevertheless the same ( or at least very similar ). In a previous post I linked to some research that is like this but in that other case they actually *did* use neural networks. Who said we don't know much about consciousness? |
11th May 2012, 04:00 PM | #336 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Yes, we know a LOT about consciousness already, and what you're describing is a type of "feeling the future."
What's cool is that when we rehearse movements in our minds, our minds are actually moving the limbs, but inhibitory impulses prevent the muscles from physically moving. When I think really deeply about playing piano, sometimes my fingers come to life and start to weakly play the notes in the air. Maybe it's because the inhibitory neurons become exhausted. |
11th May 2012, 04:18 PM | #337 | |||
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
The Chinese Room Thought Experiment
When I first heard this thought experiment I was really intrigued. It seemed persuasive and made the hard problem of consciousness very tangible.
Now, I find the Chinese Room idea stupid. Searle is a smart guy, so why does he (and so many others) find it so compelling? Would someone explain to me why it's important or persuasive? Chinese Room on Wiki Video demo of the Chinese Room starts at 16:45 in this cool BBC program, "The Hunt for AI."
|
|||
11th May 2012, 10:25 PM | #338 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
Yeah, I have known about that first pathway for a while. What I realized just recently, which is mentioned in the research, is that not only are the outgoing motor impulses inhibited, but they also lead to the same downstream effects as incoming sensory percepts.
Apparently the motor networks are always recurrent, and a model of the results is always being generated from any outgoing movement signals; we just don't notice it because usually the real thing happens, and the sensory percepts from actually moving a limb trump those from imagining moving a limb. Only when the real thing is inhibited do the simulated results become apparent. This also nicely explains why a deviation from the expected is such an attention-getter for a conscious animal -- if the results of the model and the results of reality don't match up, it would be trivial for a network to see it, especially since both results will arrive in the same location at approximately the same time. Fascinating. I wonder if we can start a sticky thread about "consciousness: the facts". |
11th May 2012, 10:41 PM | #339 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
No, because it isn't.
Searle formulated it in just about the stupidest way possible. He did that on purpose. He doesn't want people actually thinking about the issue, he wants them to be blinded with emotion and just give up. Case in point -- why Chinese and not English? Why a man in the room, and not a robot? Why a room, and not the brain of a giant? The whole thing is absurd. |
12th May 2012, 01:03 AM | #340 |
Philosopher
Join Date: Jun 2010
Posts: 9,800
|
Natural language translation is a hard problem. We're talking "hard" with a capital Nobel.
But like many hard problems, it's theoretically possible to brute force it. To just make a giant-ass lookup table covering every possible circumstance. That's all Google does today, really. You'd be surprised how few unique phrases there actually are. Incidentally, computer scientists around the time of this argument (1980) were all really excited about the possibility of making giant-ass lookup tables for absolutely everything. They called it "expert systems." These computer scientists argued that a computer armed with enough of these lookup tables was intelligent. Not "indistinguishable from," not "might as well be considered," was. A computer with a sufficiently large Chinese-English dictionary would know how to translate between them. But hold on, Searle said. Let's give this giant-ass lookup table to some jackass in a room instead. He don't know Chinese. He ain't gonna learn Chinese, not when he just looks up sentence indexes. He doesn't understand what you're asking him. Look at him, he gets paid to sit in a dark room and do whatever was the 1980 equivalent of filling out captchas all day. So whatever we're looking for with this whole "intelligence" thing, whatever Derpy McBlackbox over there has that my pocket calculator don't, the dictionary alone doesn't have it either. Moreover, this is a general problem. Just because you have a big enough index to answer every question doesn't mean you can call it "thinking." |
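The room's mechanics really are just this (a hypothetical two-entry phrasebook; the thought experiment's table is assumed to cover every possible input):

```python
# Miniature of the lookup-table "expert system" described above.
# The operator indexes responses with zero comprehension of either side.
phrasebook = {
    "你好吗?": "我很好。",      # "How are you?" -> "I'm fine."
    "你会下棋吗?": "会一点。",  # "Do you play chess?" -> "A little."
}

def room_operator(message):
    """Return the canned reply, or fail on anything not in the table."""
    return phrasebook.get(message, "???")

print(room_operator("你好吗?"))    # a fluent-looking reply, no understanding involved
print(room_operator("你饿了吗?"))  # -> ??? (not in the table)
```

Whatever "intelligence" is, it's whatever makes the second call answerable, and the table alone never has it.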
12th May 2012, 01:09 AM | #341 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,041
|
I don't see how this follows: the fact that one part of the machine can't be said to be intelligent doesn't mean that the machine as a whole isn't. And the fact that one part of the machine doesn't understand Chinese doesn't mean that the machine as a whole doesn't.
I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean. |
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together." Isaac Asimov |
|
12th May 2012, 05:35 AM | #342 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
|
__________________
"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa "We live in a world of more and more information and less and less meaning" Jean Baudrillard http://bokashiworld.wordpress.com/ |
|
12th May 2012, 06:28 AM | #343 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,041
|
|
|
12th May 2012, 11:29 AM | #344 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
|
|
12th May 2012, 11:58 AM | #345 |
Philosopher
Join Date: Jun 2010
Posts: 9,800
|
Robustness. Ask the room something not in the phrasebook but which it can answer, a differently worded question perhaps. A strong AI which understands Chinese could answer you anyway. Weak AI, using the lookup table alone, could not. Both are common definitions of the word "intelligence," which previously had been far from clear.
I should probably add here that I don't actually support the Chinese Room argument. It's wrong. Not because of any semantic foolishness, but because he assumes the operator (human or machine) has no capacity to learn the semantics of the symbols it manipulates. This was a perfectly fair assumption for its time, because people were generally arguing such a learning capacity would not be needed. Add in that capability, though, and with time and practice you end up with an agent with some fragmentary shard of strong AI. It may not know any of the concepts the questions or answers refer to, but it truly understands how the one should map to the other. They're all wrong. The word is a catch-all term for a large variety of behavioral and information processing steps, and these days is increasingly hijacked by people trying to push a "humans are special" agenda. It's almost as bad as "consciousness." |
12th May 2012, 02:31 PM | #346 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Chess and the Chinese Room
A few years ago I was into playing chess on Yahoo. You set up a board and wait for a human opponent of similar rank to accept your game, and away you go.
Then one day, something disturbing happened. I was kicking someone's ass, and instantly after I won a piece, he started to play absolutely perfectly and in very few moves destroyed me. I felt pretty sure that he was playing himself until I started to kill him, then started using a computer. I think he just didn't want to fall in the rankings. The interesting thing is that the magic bean of my opponent's personality went away, and I noticed it instantly. Something like playing tug of war with a person, feeling his living muscles through the rope, then the rope getting hitched to a bulldozer and getting pulled into the mud in one mechanical stroke. Or, as if there were a person who knew only a little Chinese in the room, and when they had to respond in a way over their heads, they switched to the book, compiled by experts. ...but chess is not a lookup-table task for AI. There are too many possibilities; the table would have to be as big as the universe or something like that. I've worked on look-ahead games, and made one that had no such table. It "felt the future" by imagining every possible move its opponent might make, its possible answers, etc. I also added emotion to it -- it put up a happy face when it expected a win, and a sad face when it saw it was losing. Unlike us, it didn't let its emotions interfere with its intelligence. |
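The "feel the future" search described here is essentially minimax: imagine every move, the opponent's replies to each, and so on, then pick the line with the best outcome. A toy version (a made-up number game, not chess) might look like:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Score a state by imagining future moves out to the given depth.
    The maximizer assumes the minimizer plays its best reply, and
    vice versa, at every imagined ply."""
    if depth == 0 or not moves(state):
        return evaluate(state)  # leaf: stop imagining and judge
    scores = [minimax(m, depth - 1, not maximizing, moves, evaluate)
              for m in moves(state)]
    return max(scores) if maximizing else min(scores)

# Toy game: a state is an integer; each side may add 1 or 2 until 10.
moves = lambda s: [s + 1, s + 2] if s < 10 else []
evaluate = lambda s: s  # the maximizer prefers big numbers

print(minimax(0, 3, True, moves, evaluate))  # -> 5
```

An "emotion" display like the one described is then trivial: show a happy face when the root score favors the machine, a sad one otherwise.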
12th May 2012, 02:41 PM | #347 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.
(Chinese because it's often cited as an example of a language that's extremely cryptic to Westerners. A man in a room because it brings home the point that the man has no understanding of the meaning of the messages he's transcribing. His magic bean of understanding is never engaged, yet the one outside the room feels it is.) So, Searle was arguing that the Chinese Room, like expert systems, did not understand the subject, but was playing back the understanding of the experts that created the table. Funny how so many people misunderstand its point, like the point of Schrodinger's Cat. |
12th May 2012, 04:30 PM | #348 |
Philosopher
Join Date: Jun 2010
Posts: 9,800
|
|
12th May 2012, 05:27 PM | #349 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,041
|
|
|
12th May 2012, 09:32 PM | #350 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
From Number of possible chess games:
Quote:
But, whatever the number, lookup-table implementation is not feasible for chess playing machines, which need to feel the future to play well. |
12th May 2012, 10:13 PM | #351 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
The problem is that the thought experiment uses absurdity to extinguish absurdity.
It is absurd to think that a giant lookup table is relevant to *anything* when it comes to intelligence, because by definition we consider intelligence the ability to do something other than reference pre-defined behavioral reactions. The proper counter to this stupid argument by the old computer scientists is to just point out that they are idiots, not to formulate an even more bizarre scenario that is so unclear that every armchair philosopher on the internet has spun it into supporting their own uneducated opinions. |
12th May 2012, 10:22 PM | #352 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
Heh, I just made that up. That is my own interpretation, based on the fact that I could argue why a lookup table is not equivalent to intelligence without referencing absurd scenarios, and it would be far clearer to everyone.
Hence, there must have been an ulterior motive, I tell myself. I am wary of any philosopher interested in consciousness and cognition who doesn't immerse themselves in programming; it seems disingenuous. And Searle, like Penrose, is that type. (Penrose isn't a philosopher, but he isn't a programmer either, so any notion he has about what an algorithm can or cannot do is amateur, and that is why I don't respect him at all when it comes to this issue.) Note that I feel sort of the same way about all these types, regardless of which side they support: Dennett, Blackmore, etc. I can't stand listening to people quote Daniel Dennett or Susan Blackmore talking about how little we really know when it comes to consciousness, and saying "see, they are even supporters of the computational model and they admit that we don't know much."

Yeah, but here is the thing -- was Searle clear that the instructions the guy in the room follows are merely some implementation of a lookup table? I don't recall that being explicitly part of the description, and if it is, he hasn't done a good job squashing all the bad versions of the Chinese Room that are crawling around. Because all I ever hear from armchair philosophers is that the Chinese Room is supposed to show that *any* mechanical instructions the guy follows somehow invalidate any possible understanding of Chinese that the room might have. In other words, I see the most common interpretation to be a suggestion that the idea of machine consciousness is absurd. But you and I and anyone who thinks about it knows this isn't the case -- if the instructions on the cards represent something more like CPU instructions and register values, meaning the guy is actually just implementing an algorithm that could be anything, it is less clear-cut that the idea of the room understanding Chinese is absurd.
And if the instructions on the cards represent something like a neural network simulation, then it isn't clear at all that the room doesn't understand Chinese. In that case it seems like the room *does* understand Chinese. This is just one of those cases -- like every other case in this discussion, actually -- where incorrectness stems primarily from a failure to be specific about what we are talking about. |
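The "cards as a neural network simulation" point can be made concrete: the operator's instructions could just be arithmetic that happens to implement a network. A toy single-neuron forward pass, with hypothetical weights:

```python
def step(x):
    """Threshold activation: fire (1.0) or don't (0.0)."""
    return 1.0 if x > 0 else 0.0

def tiny_network(inputs, weights, bias):
    """One neuron: the operator multiplies, adds, and thresholds.
    No single instruction mentions 'Chinese' or 'understanding';
    whether the *system* understands anything is the open question."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(total)

# The operator executes these steps blindly, card by card.
print(tiny_network([1.0, 0.0], weights=[0.6, -0.4], bias=-0.5))  # -> 1.0
```

Scale the same bookkeeping up to billions of neurons and the "room" is simulating a brain-like network, which is exactly where the intuition that the room can't understand stops being obvious.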
12th May 2012, 10:34 PM | #353 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
Yes, and actually the best chess engines use a huge number of lookup-table references in their logic. They call it "endgame tablebase" analysis.
However, that is *not* thinking. It is no different than you turning left out of your driveway because you are used to it. At some point, when you first bought your house, you had to *think* about which direction to turn, and the same at the next turn, etc., when you went to work in the morning. But after a while it is burned into your memory, and you just do it without thinking. It is also worth noting that endgame tablebases don't help win games that aren't constrained by artificial rules, and they matter less and less in games with fewer artificial rules. They also don't really help that much in games where the tables can turn rapidly towards the end. |
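The recall-versus-thinking split described here is easy to sketch. The tablebase entries below are made-up toy labels, not real tablebase data:

```python
# Toy "endgame tablebase": known positions map straight to results.
tablebase = {"K+R vs K": "win", "K vs K": "draw"}

def evaluate_position(position, search):
    """Use the memorized answer when one exists (recall, not thinking);
    fall back to genuine look-ahead search otherwise."""
    if position in tablebase:
        return tablebase[position]
    return search(position)

print(evaluate_position("K+R vs K", search=lambda p: "searched"))    # -> win
print(evaluate_position("K+N vs K+P", search=lambda p: "searched"))  # -> searched
```

Real engines work the same way in outline: memorized results where the table reaches, search everywhere else.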
13th May 2012, 01:23 AM | #354 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
|
|
13th May 2012, 01:26 AM | #355 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
|
|
13th May 2012, 01:50 AM | #356 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
No, Dodger, if you want to study consciousness you need to study human behavior. Reducing human behavior to neuron behavior, and then trying to build models of neuron behavior that become human behavior, is useless unless we know what human behavior is.
You're continually making the false assumption that since human brains are built of neurons, if we study the behavior of neurons we will be able to create brains. It may be the way you write computer games, by building models from basic logical procedures, but it is useless if you don't know what the model is supposed to model. Taking the PM approach of defining a complex human behavior such as consciousness as a simple behavior may make the idea of modeling from basic switch behavior easier, but that's irrelevant if we have yet to define the behaviors which make up consciousness. An economist may define a human as a unit with x spending power for their economic model, but this definition is useless for a doctor who is modeling the spread of TB in a population. Again, if you're selling games to children and you want them to be convinced the behavior they are seeing is "real", then your skill relates to their ability to be fooled. Attempting to get everyone to accept a limited definition of consciousness so that they can be fooled into believing your programming leads to consciousness is not exactly scientific. The idea of getting everyone to learn programming so they also learn how to trick people, and become convinced that tricking people is the way the real world works, is also not scientific. The agenda amongst computationalists is clearly to justify their ability to trick people by claiming that's how the real world also works. Remind you of priests, anyone? |
|
13th May 2012, 04:23 AM | #357 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,041
|
|
|
13th May 2012, 11:46 AM | #358 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
I completely agree.
What is the issue you are complaining about? I am not Pixy, and neither are any of the very smart people doing research on machine consciousness. Understanding human behavior is the first step in all of the research that I familiarize myself with. For example, in the paper I just discussed with Mr. Scott a few posts ago, the research was done according to known information about primate behavior, namely the way we plan and initiate movements in the context of learning by imitation. Furthermore, the information includes things like MRI results, so it isn't just pie in the sky either. This is very factual stuff.

That isn't the idea. The idea is to get everyone to learn programming because it is almost unique among human endeavors in that it *forces* the practitioner to think logically about something in order to see any results at all. And it is certainly the *only* such endeavor, from that already small set, that is so easily accessible to anyone -- anyone with a computer can start, since there are thousands of free compilers and interpreters for whatever language one cares to use. The fact is, computer science is really about wrapping your brain around algorithms, which are just sequences of events. It is about seeing how to get from point A to point B in reality, a skill far too few people have learned. I wish more scientists of all types were familiar with that skillset; I think the world would progress much faster. I can't tell you how many biology grad students I worked with, when I was a lab assistant, who spent far too much effort trying to figure out why this or that cellular process or pathway worked the way it did, when, if they had taken some courses on programming, they might easily have seen how the steps of the process fit together to lead to the results they were seeing. So why should cognition be any different? It shouldn't.
Our brains are made of stuff that behaves according to the laws of nature, and figuring out the ways that stuff might do things that lead to, say, me typing a response to you simply requires an understanding of how sequences of events lead to results. Computer science doesn't have to have anything to do with either computers or science. In fact I wish it wasn't named computer science, because it is so misleading. It is the study of step-by-step processes. The advantage I have over people who don't know how to program is that at this point I have an almost intuitive understanding of how step-by-step processes might lead to this or that result. If you had the same understanding, we wouldn't even be having this argument, because you would see the whole consciousness issue in a completely different light. |
13th May 2012, 01:58 PM | #359 |
Philosopher
Join Date: Jun 2010
Posts: 9,800
|
Thanks for clarifying. Maybe insulting his audience to their faces would have been a more satisfying response to their assertions, but I doubt it would have had the same impact. Like it or not, his argument was very effective at its intended purpose, and if it's been hijacked these days by true believers, well, so what? They'd have just latched on to something else otherwise.
|
13th May 2012, 03:58 PM | #360 |
Philosopher
Join Date: Mar 2009
Posts: 6,360
|
|