International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
Old 12th August 2009, 03:24 PM   #281
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by shuttIt
But if I'm asleep, surely it would be acceptable usage of the word to say that I am unconscious.
In non-REM sleep, yes. In REM sleep I'd say it was just an altered state of the same conscious processes you exhibit when you're awake.

Quote:
We agree that this can't be done.
Maybe we do. Does John Edward? He claims to talk to disembodied conscious entities all the time. If we could actually locate such an entity, it would disprove the claim that only the brain can generate consciousness.

Quote:
Nowhere is it written that everything has to be accessible to scientific inquiry. It seems plausible to me that consciousness (in the specific sense I mean it), what caused the big bang and the like are unknowable. If we are going to impose the constraint that they ARE knowable and then reason from there, we should at least admit that this is a pragmatic assumption and could be wrong.
Of course it could be wrong, but that's the only approach we have. We just keep hammering away at neuroscience until we feel we understand consciousness or reach a dead end.

However, I know for sure that there are some people who will say that we only ever discover the neural correlates of consciousness, no matter how much progress we make. It's those people who have an unfalsifiable hypothesis. (Not saying you're one of those people.)

Quote:
At the risk of rambling, it just seems to me that the fact that I am not a reasoning meat automata, and do in fact have an inner 'I' is not something that one would have predicted from anything we have learned about physics, chemistry or biology.
Sorry, now you're making an unwarranted assumption. I think you're just a reasoning meat automaton. Your inner "I" is a clever illusion pieced together from a hundred different processes because it was evolutionarily advantageous to do so.

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Old 12th August 2009, 03:24 PM   #282
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by Paul C. Anagnostopoulos View Post
We agree that the mechanism bears no resemblance. I see no reason why both mechanisms cannot produce consciousness, since they are computationally equivalent (modulo any real-time-sensitive processes).
Computation is, of course, an abstract description of what the brain is doing physically.

If we create a physical circuit-brain which can also be described by the same kind of abstractions -- in other words, which is also doing what the brain is doing -- then maybe it can be conscious.

But when we get to the situation of a person with a pencil at a table who is thinking through these abstractions in his head, and using the pencil to help him along, then we're in different territory altogether.

(This is distinct from roger's TM example above)

In that case, there is no substantiation of what the abstractions symbolize.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 03:25 PM   #283
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by Paul C. Anagnostopoulos View Post
Sorry, I don't understand why. Perhaps you could try to explain why, rather than simply repeating your assertion.


About which part of what I said?
See above.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 03:26 PM   #284
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
In short, you can't move into a plat. You can't haul a ton of freight across a river by driving over the blueprint of a bridge.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 03:32 PM   #285
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by Piggy
Consciousness arises from the physical activity of the brain.
Ah well, if you are going to define consciousness as going on in a brain, that does eliminate any other form of consciousness.

Why would you do that?

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Old 12th August 2009, 03:36 PM   #286
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by Paul C. Anagnostopoulos View Post
Ah well, if you are going to define consciousness as going on in a brain, that does eliminate any other form of consciousness.

Why would you do that?
That's not an exclusionary statement.

It's true that consciousness arises from the physical activity of the human brain, and almost certainly other animal brains, as well.

That doesn't mean it can't arise from other types of "brains".

But if anyone is going to suggest some radically different type of consciousness, then they are going to have to explain how it is created and maintained.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 03:44 PM   #287
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by Piggy
roger, thank you for post 267. An excellent post. I'll have to get to it tonight.
Agreed.

Quote:
I'll say up front that the TM you're describing is indeed qualitatively different from the pencil-pushing human.
I don't see how. One of the first large applications I wrote was an industrial-strength Turing machine simulator for use by computer science courses. Arbitrary-sized tape, save and restore, assembler for TM programs, the whole deal. I wrote it in PL/I back in the days of batch runs, where I was lucky to get two runs per day. I spent a lot of time hand-simulating the program to uncover bugs without wasting runs. My hand simulation appeared in every way to be equivalent to running the program.
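A hand-simulable Turing machine needs surprisingly little machinery, which is what makes the equivalence between running it and simulating it by hand so direct. Here is a minimal sketch in Python (the transition-table format and the bit-flipping example program are invented for illustration; this is not the original PL/I simulator):

```python
# Minimal Turing machine simulator. The program is a transition table:
# (state, symbol) -> (new_state, symbol_to_write, head_move).
# '_' stands for the blank symbol; the tape grows on demand.

def run_tm(program, tape, state="start", head=0, max_steps=10_000):
    cells = dict(enumerate(tape))             # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += move
    # Render the visited portion of the tape back to a string
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

# Example program: flip every bit until the first blank, then halt.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # -> 1001_
```

Every step of this loop is something a person can do with pencil and paper: look up a table row, write a symbol, move the head. Nothing in the computation distinguishes the two.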

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Old 12th August 2009, 03:48 PM   #288
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by Piggy
In short, you can't move into a plat. You can't haul a ton of freight across a river by driving over the blueprint of a bridge.
But, as you agreed before, consciousness is a process and not a thing. There is no physical freight to move.

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Old 12th August 2009, 05:39 PM   #289
roger
Penultimate Amazing
 
roger's Avatar
 
Join Date: May 2002
Posts: 11,465
Originally Posted by William Parcher View Post

If you have time for an eccentric view on consciousness, try Sir Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Penrose is a brilliant mathematician who tried to tackle consciousness from that starting point. A bit like a plumber doing his best to explain a computer.
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.
__________________
May your trails be crooked, winding, lonesome, dangerous, leading to the most amazing view. May your mountains rise into and above the clouds. - Edward Abbey

Climb the mountains and get their good tidings.
Nature's peace will flow into you as sunshine flows into trees. The winds will blow their own freshness into you, and the storms their energy, while cares will drop off like autumn leaves. - John Muir
Old 12th August 2009, 05:53 PM   #290
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by roger
Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.
Don't miss the Tegmark/Penrose/Hameroff debate:

http://space.mit.edu/home/tegmark/brain.html

Quote:
It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, ...
Just an observation to no one in particular: Algorithmic does not mean constrained by logic. The brain may construe a false statement as true. [Marvin Minsky]

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Old 12th August 2009, 06:14 PM   #291
William Parcher
Show me the monkey!
 
William Parcher's Avatar
 
Join Date: Jul 2005
Posts: 20,182
Roger, slag away because I agree with your review of Penrose-Emperor's New Mind. I called it eccentric, but could have just said weird and unfounded. Penrose may have had little choice other than to deal with consciousness from his own discipline. The plumber tells you how a computer works. Things did get better with his follow-up called Shadows of the Mind: A Search for the Missing Science of Consciousness. I got halfway through and just never started it up again. It sits with about 20 others that are not finished.

I had the opportunity to meet Pinker in 2000 and really like the man as a person. I think he is going to remain a primary figure in this field (cognitive neuroscience) for a long time, especially considering that he is young.
__________________
Bigfoot believers and Bigfoot skeptics are both plumb crazy. Each spends more than one minute per year thinking about Bigfoot.
Old 12th August 2009, 06:27 PM   #292
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by roger View Post
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.
You might also check out Gazzaniga's stuff.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 06:37 PM   #293
William Parcher
Show me the monkey!
 
William Parcher's Avatar
 
Join Date: Jul 2005
Posts: 20,182
Good suggestion, Piggy. Pinker and Damasio both reference Gazzaniga.
__________________
Bigfoot believers and Bigfoot skeptics are both plumb crazy. Each spends more than one minute per year thinking about Bigfoot.
Old 12th August 2009, 06:49 PM   #294
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
ETA: Ignore this post: I hit submit instead of preview. I'm not done. Will submit a complete post downthread. -Piggy

__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 06:55 PM   #295
Dan O.
Banned
 
Join Date: Feb 2007
Posts: 13,594
Piggy, you completely misunderstand. When the others are talking about a pencil and paper brain, they are not saying the pencil and paper are the brain. They are saying that the pencil and paper are the engine that manifests the constructs of the universe in which the brain resides just as we think of reality as the engine that manifests matter and energy of our universe. And just as we have no way to see what our reality is beyond deducing the rules by which it operates, a brain constructed in the universe created by pencil and paper would have no way to see the pencil and paper. It would think and feel exactly as we do if the rules of its universe were the same as ours. In fact, apart from the absurdity of it, there is no experiment that we could perform to prove that we are not in the pencil and paper universe.
Old 12th August 2009, 07:35 PM   #296
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
roger:

Having read post 267, I can now see where the gaps are in our communication on this -- or some of them, at least.

If I had known you were making such grand assumptions about the computational theory of the mind (CTM), I would not have said that I agree with you.

I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.

I would wager -- in fact, I would wager quite a bit, at very high odds -- that although it is a "good enough" model for current investigations, it will turn out to have significantly limited explanatory power down the road.

My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a sample critique from 2006, for example.)

You might be interested, btw, in Raymond Tallis's "Why the Mind is Not a Computer". It's rather thin, both in scope and in hard information on the brain, but it's an interesting examination (a la Dennett and Pinker -- though he would disagree with Pinker certainly on the topic of CTM) of how our own brains may have fallen victim to the associative nature of language in carrying over spurious assumptions when describing the brain in computational terms.

You said I'd get the Nobel Prize if I could prove that something in the brain is not computational, and certainly I'd get some kind of prize if I could do that. But not because I would be refuting anything that has supposedly been established. Rather, it would be because I settled an open question.

You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either. Like the daisies, it is a purely physical, specifically biological, system interacting with the material world. But it is one which we know generates consciousness, whereas daisies do not.

And recent studies into biological systems and how they behave and evolve give us reason to doubt that purely computational systems could evolve in biological specimens.

You compare neurons to transistors, but that comparison doesn't quite fit.

Systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle-room. They are fuzzy. Like the heart. It can take a good bit of knocking around before it goes into fibrillation.

In fact, it was recently discovered that a highly regular heartbeat is bad news, because highly regular heartbeats are prone to fibrillation. Some random variation in heartbeat pattern is a good thing -- it means your heart can absorb shocks and return to its natural operational state. If it gets too rigid, it can too easily get knocked into the alternate contractive pattern that will kill you.

And although it is very useful to model neurons computationally, transistors they are not.

As you probably know, a simple model of a neuron consists of a synapse where the neuron picks up neurotransmitters (NTs) from adjoining neurons. When a sufficient number of NT molecules bombard the neuron, it reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.

We can model this set-up computationally, even writing a simple program with values for n (the number of NT molecules to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 to 1 and back down to 0, etc.
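The simple program just described can be sketched directly; the parameter names n and f follow the post, while the class name and the particular values are invented for illustration:

```python
# Toy threshold neuron, per the simple model above: it accumulates
# neurotransmitter (NT) hits, fires (f goes from 0 to 1) when the
# accumulated amount reaches the threshold n, then resets f to 0
# and re-collects. Values are arbitrary.

class ToyNeuron:
    def __init__(self, n=5):
        self.n = n          # NT molecules needed to reach threshold
        self.level = 0      # NT accumulated since the last firing
        self.f = 0          # 1 while firing, 0 otherwise

    def receive(self, nt_molecules):
        """Absorb NT from the synapse; fire and reset at threshold."""
        self.level += nt_molecules
        if self.level >= self.n:
            self.f = 1      # fire: signal travels down the neuron
            self.level = 0  # re-collect / reset
        else:
            self.f = 0
        return self.f

neuron = ToyNeuron(n=5)
fires = [neuron.receive(3) for _ in range(5)]  # 3 NT molecules per step
print(fires)  # -> [0, 1, 0, 1, 0]
```

Note how much this leaves out: no NT re-uptake dynamics, no variation between molecules, no surrounding chemistry. That gap between the model and the wet biology is exactly the point being made here.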

And that's extremely useful.

But it's an idealization.

The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.

So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.

And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one (although even here computational models have proven useful, by describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely it is that the brain will choose that option, even if those associations have nothing to do with the task at hand).

This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs, though, I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.

Can all the workings of the brain be performed by a TM? Right now, there's no reason to accept the assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.

So, after all that (and ignoring the pencil-brain thing, which has turned out to be a red herring, and now I see why) let's look again at the speed question.

If we had a robot brain which, by whatever method, worked like a human brain -- because we have no other model to use -- what would happen to its consciousness if we slowed down the rate of data-transfer between its basic components (the equivalent of neurons)?

First, we cannot assume that this brain is some sort of TM. That would be jumping the gun.

Instead, we must assume it works like a human brain, which may be some sort of TM equivalent, but maybe (I'd say most probably) not.

Well, obviously if we slow down the rate to zero, there's no consciousness. (That's why I kept bringing that up -- not because I thought you were arguing otherwise, but as one end of a continuum.)

Somewhere between natural brain-speed and 0, then, there's a point where consciousness is not sustainable. Is it at 0?

No, it can't be. It must be higher than 0. Why? Because from cases like Marvin's, and more recent research demonstrating that we act on stimuli before we are consciously aware of them, we see that consciousness is a specialized downstream process involving the coordination of highly processed information. Because the phenomenon of conscious awareness requires coordination of coherent aggregate data, and because neural impulses are ephemeral, there must be a point higher than 0 at which coherence is insufficient to maintain conscious awareness.

Right now, no one knows what it is, but because we know that neurons fire very quickly, it's safe to assume that a rate of 1 impulse per second would be too slow.

Can we build a robot brain that would accept that rate?

I doubt it, because you'd have to accept a kind of flickering consciousness, like a movie run frame by frame.

But that wouldn't work, either, for reasons that another poster mentioned above.

Consciousness is not a point-in-time phenomenon. It's smeared out over time. There's a kind of Heisenberg effect to it, with associations being continually wrangled and forced into service. It's not like a camera lens, opening to the world and letting the light in.

So I can't see an impulse-per-second rate being high enough, even though no one can say, at this moment, what the floor would be.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 07:37 PM   #297
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by Dan O. View Post
Piggy, you completely misunderstand. When the others are talking about a pencil and paper brain, they are not saying the pencil and paper are the brain. They are saying that the pencil and paper are the engine that manifests the constructs of the universe in which the brain resides just as we think of reality as the engine that manifests matter and energy of our universe. And just as we have no way to see what our reality is beyond deducing the rules by which it operates, a brain constructed in the universe created by pencil and paper would have no way to see the pencil and paper. It would think and feel exactly as we do if the rules of its universe were the same as ours. In fact, apart from the absurdity of it, there is no experiment that we could perform to prove that we are not in the pencil and paper universe.
No, I understand that.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 07:43 PM   #298
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
An experiment on linguistic perception related to the role of data coherence in conscious experience.

Quote:
It seems to be the convergence of these measures in a late time window (after 300 ms), rather than the mere presence of any single one of them, which best characterizes conscious trials. "The present work suggests that, rather than hoping for a putative unique marker – the neural correlate of consciousness – a more mature view of conscious processing should consider that it relates to a brain-scale distributed pattern of coherent brain activation," explained neuroscientist Lionel Naccache, one of the authors of the paper.

The late ignition of a state of long distance coherence demonstrated here during conscious access is in line with the Global Workspace Theory, proposed by Stanislas Dehaene, Jean-Pierre Changeux, and Lionel Naccache.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?

Last edited by Piggy; 12th August 2009 at 09:32 PM.
Old 12th August 2009, 07:52 PM   #299
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Here are a couple of articles that explore some very advanced uses of computational models with great success.

If more advances are made in this direction, the CTM may indeed win the day.

Neural Networks Help Unravel Complexity Of Self-awareness


Two Brains, One Thought: Wiring Diagrams Of A Neuronal Network Based On Its Dynamics

But if a version of CTM proves accurate, consciousness is still a very high-level function, and neural impulses (the building blocks of all high-level functions) are still ephemeral. Therefore, I don't see how we can get around the conclusion that there must be a floor for impulse speed below which consciousness is unsustainable.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 09:06 PM   #300
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Regarding "flickering" consciousness:

We actually have real-world examples of flickering consciousness.

When the brain is tired, it will take micronaps.

You might have experienced these while driving. You jerk awake and you're a few yards down the road. Probably, you get a huge rush of adrenaline.

It can happen at your desk, or just about anywhere.

From your POV, you're not aware of the gaps. Your awareness seems continuous; you just have these lost moments of time.

So obviously, it's possible for awareness to "flicker" to a certain extent.

The question for this thread would be how short the conscious spans can be and how long the gaps can be.

But that said, the brain is still running at speed during all of this, so it's not necessarily correlative.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 12th August 2009, 10:02 PM   #301
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,292
Originally Posted by Piggy View Post
Thanks for the links.

Quote:
But if a version of CTM proves accurate, consciousness is still a very high-level function, and neural impulses (the building blocks of all high-level functions) are still ephemeral. Therefore, I don't see how we can get around the conclusion that there must be a floor for impulse speed below which consciousness is unsustainable.
If neurons work to a set of rules that we can figure out, it should also be possible to construct a machine that carries out those rules, be it in silicon or pencil and paper. Such a machine would not be bound by the biological/physical constraints on the brain, and therefore its computation could be slowed down arbitrarily. The machine would be conscious because it is directly equivalent to a conscious brain, but the slow speed of computation and the huge amount of data involved would make it unlikely that we would ever recognise it as conscious.

Please notice that a CTM would not be necessary for such a machine to work, provided that every neuron of a real brain is included, and that every function of a real neuron is simulated.

However, in the real world, such a simulation is impossible, and a CTM is necessary for us to understand what is going on in a brain, and also in order to cut down the size of the simulation.
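A minimal sketch of steenkh's point (a toy network with names of my own invention, not anyone's published model): once the update rule is written down explicitly, the rate at which you execute it is independent of what it computes.

```python
import time

def step(state, weights, threshold=1.0):
    """One update of a toy rule-following network: each unit
    fires (1) iff its weighted input meets the threshold."""
    return [1 if sum(w * s for w, s in zip(row, state)) >= threshold else 0
            for row in weights]

def run(state, weights, steps, delay=0.0):
    """Apply the rule repeatedly. The result is the same whatever
    the delay; wall-clock speed is irrelevant to the computation."""
    for _ in range(steps):
        state = step(state, weights)
        time.sleep(delay)  # slow the 'brain' down arbitrarily
    return state
```

Whether `delay` is zero or a century per step, `run` produces the identical final state; only an outside observer's ability to notice it changes.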
__________________
Steen

--
Jack of all trades - master of none!
steenkh is offline
Old 12th August 2009, 10:26 PM   #302
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by steenkh View Post
If neurons work to a set of rules that we can figure out, it should also be possible to construct a machine that carries out those rules, be it in silicon or pencil and paper. Such a machine would not be bound by the biological/physical constraints on the brain, and therefore its computation could be slowed down arbitrarily. The machine would be conscious because it is directly equivalent to a conscious brain, but the slow speed of computation and the huge amount of data involved would make it unlikely that we would ever recognise it as conscious.
Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?

Originally Posted by steenkh View Post
However, in the real world, such a simulation is impossible
What other world are we talking about?
Piggy is offline
Old 13th August 2009, 02:25 AM   #303
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,292
Originally Posted by Piggy View Post
Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?
How do you describe the experience of consciousness?

Quote:
What other world are we talking about?
There is this theoretical world where billions of neurons can be simulated by pencil and paper ...
steenkh is offline
Old 13th August 2009, 04:16 AM   #304
shuttlt
Philosopher
 
Join Date: Aug 2008
Posts: 5,628
Originally Posted by Piggy View Post
Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?
Piggy, in my naive way, that is exactly what I'd like to know. I'll go back to lurking now.
shuttlt is offline
Old 13th August 2009, 05:24 AM   #305
Paul C. Anagnostopoulos
Nap, interrupted.
 
Paul C. Anagnostopoulos's Avatar
 
Join Date: Aug 2001
Posts: 19,143
Originally Posted by Piggy
First, we cannot assume that this brain is some sort of TM. That would be jumping the gun.
Agreed, although I'm not sure what more it could do. Heck, even a nondeterministic Turing machine is no more powerful than a TM.

Quote:
Instead, we must assume it works like a human brain, which may be some sort of TM equivalent, but maybe (I'd say most probably) not.

Well, obviously if we slow down the rate to zero, there's no consciousness. (That's why I kept bringing that up -- not because I thought you were arguing otherwise, but as one end of a continuum.)

Somewhere between natural brain-speed and 0, then, there's a point where consciousness is not sustainable. Is it at 0?

No, it can't be. It must be higher than 0. Why? Because from cases like Marvin's, and more recent research demonstrating that we act on stimuli before we are consciously aware of them, we see that consciousness is a specialized downstream process involving the coordination of highly processed information. Because the phenomenon of conscious awareness requires coordination of coherent aggregate data, and because neural impulses are ephemeral, there must be a point higher than 0 at which coherence is insufficient to maintain conscious awareness.
The impulses are ephemeral because the brain uses chemistry. But is this ephemeral nature required? Perhaps a less ephemeral substrate, such as electronics, could operate arbitrarily slowly.

Thanks for the interesting links.

~~ Paul
__________________
Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon. ---Susan Ertz

RIP Mr. Skinny, Tim
Paul C. Anagnostopoulos is offline
Old 13th August 2009, 06:38 AM   #306
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by Paul C. Anagnostopoulos View Post
The impulses are ephemeral because the brain uses chemistry. But is this ephemeral nature required? Perhaps a less ephemeral substrate, such as electronics, could operate arbitrarily slowly.
I wonder if you could build a brain that used non-ephemeral tokens, so that signals worked something like objects on an assembly line, and when enough tokens had arrived the crew then assembled them.

That's a really tough question b/c then you get into the issue of how long the "flicker" has to be, and how much and what kind of pre-processed data is necessary for a minimum-length moment of conscious awareness to occur.
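Piggy's assembly-line idea can be sketched in a few lines (purely my illustration, with hypothetical names): because the tokens are durable, the count survives arbitrarily long gaps between arrivals, unlike a decaying chemical impulse.

```python
class TokenNeuron:
    """Toy 'assembly line' neuron: incoming tokens persist in a bin
    until enough have accumulated, so the timing of arrivals no
    longer matters."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.bin = 0  # tokens are non-ephemeral: they sit here indefinitely

    def receive(self, tokens=1):
        """Deliver tokens; fire (and empty the bin) at threshold."""
        self.bin += tokens
        if self.bin >= self.threshold:
            self.bin = 0
            return True   # the "crew" assembles: the neuron fires
        return False
```

A `TokenNeuron(3)` fires on the third `receive()`, whether the three deliveries arrive in a millisecond or a decade apart.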
Piggy is offline
Old 13th August 2009, 06:58 AM   #307
Philosaur
Muse
 
Join Date: Mar 2009
Posts: 971
I think this debate rests in part on whether you think human-type consciousness (there may or may not be other types) is realizable on non-biological hardware. It could turn out that a human brain is essential for a humanoid mind--which is equivalent to saying that the mind is not a Turing machine.

There is another aspect to the debate which seems to be about what the subjective experience of such an alternately-realized mind would be like, or even whether or not it would have subjective experience (I use this term rather than 'consciousness' because it's slightly more precise).

I believe that subjective experience occurs wherever you find feedback loops. Where you have fantastically complex feedback loops, as in the human mind, you have a rich and colorful subjective experience. Where you have simple feedback loops--like pointing a video camera at its own monitor--you have a correspondingly simple subjective experience.

If this story turns out to be the case, then it seems that the pencil-and-paper-and-human mind would experience whatever the human encoded as sensory input, and at whatever speed the human was able to enter it. Animals (and plants, to a degree) experience the world at all kinds of time-scales. The extremely slow speed might be similar to how a redwood tree experiences the world. It might be like a gnat trying to imagine what the world looks like to a relatively slow-moving human.

Last edited by Philosaur; 13th August 2009 at 07:03 AM.
Philosaur is offline
Old 13th August 2009, 07:00 AM   #308
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by steenkh View Post
There is this theoretical world where billions of neurons can be simulated by pencil and paper ...
I'm not sure I really want to dive into the "pencil brain" thought experiment again, but if we do, I'd have to ask for a complete description of the hypothetical set-up.

Also, if we're talking about simulating billions of neurons, what sort of simulation do we mean?

Are we talking about a virtual or actual simulation?

For example, let's say I want to simulate the impact of an object on another object. For instance, testing the theory that dislodged foam could have damaged the space shuttle's protective tiles. Can't afford to actually launch a shuttle and intentionally dislodge some foam, so I have to simulate.

I can do it virtually -- that is, run a computer simulation wherein I create a virtual world with virtual objects that have all the right virtual properties and send my virtual foam chunk hurtling into the virtual tile at the right virtual speed to see what happens.

Or I can do an actual simulation, wherein I take some foam and cut it to the right size, take a section of tile and set it up good and steady, then simulate the event by somehow shooting the foam toward the tile at the right speed from some sort of cannon.

In the latter case, if it works, I end up with a busted tile. It's a simulation of an event, but the simulation essentially replays the event in reality, thereby reproducing it. The event happens again.

In the former case, there is no busted tile, and there is no impact event. There are computer parts moving and electrons being excited and heat being produced, etc., and all that happens in such a way as to remind me of an actual event.

If the only observer to the actual simulation were a dog, the event would still have happened. There would still be one more busted tile in the universe.

If the only observer to the virtual simulation were a dog, then there wouldn't even be any simulation, when you get right down to it, just the physical and electronic actions of the machine.

That's why I say roger is wrong to say that I'm somehow introducing dualism into the pencil scenario (as I understand it -- perhaps I was incorrect about the setup he proposed). As I envisioned that scenario, it was inherently dualistic. You actually do have the equivalent of a homunculus, a ghost in the machine -- that is, the man behind the pencil. In the human brain, or in a circuit brain, there is no homunculus.

So if we get into the pencil brain thing, I think it would help to have a complete description of what's going on, and clarity regarding whether this is supposed to be an actual or virtual simulation.
Piggy is offline
Old 13th August 2009, 07:03 AM   #309
Philosaur
Muse
 
Join Date: Mar 2009
Posts: 971
Reading back over this, I noticed it's a little disjointed. I'm at work trying to get my thoughts out relatively quickly, so there's that.

Also, I know my idea about subjective experience might seem to do violence to our everyday conceptions of what it is, but since SE is by nature not accessible to outsiders, I don't think it's much of a problem. I have a decent idea what it's like to be you, less of an idea what it's like to be a bat, even less for a worm... Who's to say that there is no "what it's like" to be the stock market, or a video camera-and-monitor, or a heater-and-thermostat?
Philosaur is offline
Old 13th August 2009, 07:20 AM   #310
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Originally Posted by steenkh View Post
How do you describe the experience of consciousness?
Attempting that can sometimes lead into unnecessary distractions.

I can give examples that might serve the purpose better.

There are some interesting ones that illustrate your earlier point about consciousness being temporally dislocated.

Most of us have had the experience of lazily skimming a book or an article, maybe thinking about what we need to do in the yard that afternoon, then suddenly we realize, "Hey, wait a minute... did he just say....?" We realize, after the fact, that a second or two ago we read something startling or bizarre.

We go back up the page and sure enough, that's what the author wrote.

In our distracted state, our brain was processing the words, but it wasn't moving some of the results of that processing into the areas of the brain that focus our conscious awareness onto them because that module was busy pondering whether to try to fix the bird feeder or just tear the damn thing down and start all over again.

But in the course of that processing -- which involves multiple association tasks -- certain associations were made that caused the brain to flag a particular phrase as more important than the bird feeder, so it was routed into the "be aware of this" module, bumping out the bird feeder.

There's an instance of non-conscious (or co-conscious) processing v. conscious processing.

Another familiar scenario is being at a conference event or a party or some such where lots of conversations are going on. Your mind hears everything in earshot, but you're only consciously aware of the conversation you're involved in.

But the mind is doing triage all the time. If you suddenly remember that you forgot to call your wife half an hour ago like you promised, for a moment that thought will occupy your conscious processing space (as I said, CTM models can be very useful) and you'll tune out the conversation. Chances are, you'll have to say, "Oh, I'm sorry, I just remembered something, I have to call my wife right now," and you will have "missed" the last thing the other person said to you, even though you could not have avoided "hearing" it.

Something similar happens when we hear our name spoken within earshot. It jumps out from the background noise of surrounding conversations and music there at our conference or party. Our brain flags it as important and pushes it into the brain modules that handle conscious awareness.

If it turns out to be the voice of someone you don't know talking about something irrelevant, it's just a blip and you continue your conversation essentially uninterrupted. If it's your wife, or your boss, and she's talking to or about you, you're likely to tune out the conversation for a moment, then have to refocus and say something like "I'm sorry, I couldn't hear you there" to get the other person to repeat what they just said.

So consciousness is the experience of being "aware" of something, whether it's an event happening now, or an idea, or even an event that has recently ended.
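The triage described above reads like a fixed-capacity priority store. Here is a hedged sketch (the class and all its names are mine, not a neuroscience model) of a "be aware of this" module in which a sufficiently important item bumps the least important current occupant -- the bird feeder losing out to the startling phrase:

```python
import heapq

class AwarenessModule:
    """Toy CTM-style workspace: a fixed-capacity store where a
    sufficiently important item displaces the least important
    current occupant (an illustration, not a real brain model)."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.items = []  # min-heap of (priority, label); weakest on top

    def flag(self, priority, label):
        """Triage: admit the item only if there is room or it
        outranks the weakest thing currently held."""
        if len(self.items) < self.capacity:
            heapq.heappush(self.items, (priority, label))
        elif priority > self.items[0][0]:
            heapq.heapreplace(self.items, (priority, label))

    def aware_of(self):
        return [label for _, label in self.items]
```

With `capacity=1`, flagging "bird feeder" at priority 2 and then "startling phrase" at priority 9 leaves only the phrase in awareness; later low-priority chatter never displaces it.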
Piggy is offline
Old 13th August 2009, 07:23 AM   #311
shuttlt
Philosopher
 
Join Date: Aug 2008
Posts: 5,628
Originally Posted by Philosaur View Post
Also, I know my idea about subjective experience might seem to do violence to our everyday conceptions of what what it is, but since SE is by nature not accessible to outsiders, I don't think it's much of a problem. I have a decent idea what it's like to be you, less of an idea what it's like to be a bat, even less for a worm... Who's to say that there is no "what it's like" to be the stock market, or a video camera-and-monitor, or a heater-and-thermostat.
And I thought I was the only non-p-zombie!!!
shuttlt is offline
Old 13th August 2009, 07:30 AM   #312
Piggy
Unlicensed street skeptic
 
Piggy's Avatar
 
Join Date: Mar 2006
Posts: 15,905
Another interesting case that illustrates conscious v. non-conscious....

I wish I could recall the race, driver, and track, but I can't.

The case involved a driver who avoided smashing into a wreck. By all accounts, he should have smashed into it, since it was on the other side of a banked curve and he was going at a speed that would have made a collision unavoidable had he reacted only when he could see it.

But he slowed down before he could see the wreck.

It was a fascinating case of "instinct". The driver said he had no idea why he slowed down -- there was no smoke, and he hadn't heard the crash.

But on further investigation, it turned out not to be instinct at all. While watching car-cam tapes on replay, the driver spotted the clue. It was the spectators.

Normally, they're looking toward the oncoming drivers and cheering.

On the tape, he could see that they were all looking away and no one was cheering.

In real time on the track, his brain had picked up that clue as he approached the curve and had slapped an enormous red flag on it saying "SLOW DOWN NOW!" Because of the importance of that message, his awareness of the spectators was pushed back out of his "be aware of this" module so quickly that it became effectively subliminal, and his urge to slow down "felt" instinctive.

But upon reviewing the video, he instantly spotted the clue that had activated his "instinct" and recalled what it was that had tipped him off.
Piggy is offline
Old 13th August 2009, 07:42 AM   #313
Philosaur
Muse
 
Join Date: Mar 2009
Posts: 971
It's Clever Hans, the race car driver.

That same principle--getting information but being unaware of the vector that brought it--is most likely at work in all supposed cases of ESP--at least the ones where the proponent or 'psychic' really believes in the powers. The vector is inevitably mundane, but surprising nonetheless.

Your example brings to mind a notion in epistemology that says that knowledge is justified true belief. In the driver's case, he had a belief, it was true, but did he have justification for it? Many would say not until he realized how he acquired the true belief. But that's a tangent conversation...
Philosaur is offline
Old 13th August 2009, 07:45 AM   #314
roger
Penultimate Amazing
 
roger's Avatar
 
Join Date: May 2002
Posts: 11,465
Originally Posted by Piggy View Post

I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.
I'm running out of time to keep up with this thread. So, I'll perhaps unfairly only respond to a bit. However, it is the crux.

Okay, certainly it has not been proven, but it follows from everything we know about physics. Yes, physics. Physics is, as far as we know, computational. Certainly QM is - our predictions and calculations have reached a level of precision that we have never achieved in any other field.

From physics you get to chemistry. Again, chemistry is computational, so far as we can tell. We conclude this in two different ways. First, we observe that we can compute everything that we have seen so far. Second, reductionism. Chemistry devolves to physics, or QM. Put another way, QM in a macro environment is described as chemistry. And, as we know from Turing, any combination of computable elements is also computable.


Quote:
My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a sample critique from 2006, for example.)
"Explanatory" - I don't want to be one of those people who grasp a word out of context, but I think you probably chose this word well.

QM is not a good explanatory model of chemistry. No one uses QM to do chemistry, except in certain circumstances. There are far better models.

Yet, there is no doubt that chemistry is merely the sum behavior of QM.

Just because we can't right now come up with an easy computational model for language in no way means that language is not computational.

This is where my assertion of dualism comes in. You are saying the brain is chemicals and networks, both of which we have extraordinary evidence are computable, and then you say the sum of the parts is not computable. That just doesn't follow without a dualist element.


Quote:
You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either.
Piggy, here you go again, making assertions about a field you know little about. The network of neurons and the information stored in the neurons is the programming. It's a very basic tenet of information theory. Daisies are so bad an analogy to a computational brain that I'm astonished you are suggesting it is in any way a rebuttal to what I am saying.

Quote:
And recent studies into biological systems and how they behave and evolve give us reason to doubt that purely computational systems could evolve in biological specimens.
You'll have to cite those.


Quote:
Biological systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle-room. They are fuzzy. Like the heart. It can take a good bit of knocking around before it goes into fibrillation...And although it is very useful to model neurons computationally, transistors they are not.
Once again you don't understand computable, and you seize on irrelevant aspects. Computable does not mean deterministic, it does not mean rigid, it does not mean an inability to handle fuzziness. And certainly physical robustness has nothing to do with it. Finally, if neurons are computable, they are computable. We are talking about equivalence, not identity.



Quote:
As you probably know, a simple model of a neuron consists of a synapse where the neuron picks up neurotransmitters (NTs) from adjoining neurons. When a sufficient number of NT molecules bombard the neuron, it reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.

Quote:
We can model this set-up computationally, even writing a simple program with values for n (the number of NT molecules to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 to 1 and back down to 0, etc.

And that's extremely useful.

But it's an idealization.

The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.
Still computable.
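Indeed, the idealization in the quote is a few lines of code (a sketch with names of my choosing: `n` is the NT threshold, and `f` going 0 -> 1 -> 0 is the firing). Replacing the fixed threshold with a noisy or "fuzzy" one would leave it just as computable.

```python
def integrate_and_fire(nt_arrivals, n=5):
    """The simple program the quote describes: accumulate
    neurotransmitter (NT) hits; when the count reaches the
    threshold n, f goes 0 -> 1 (fire) and back to 0, and the
    accumulated NT is re-collected (reset)."""
    count, trace = 0, []
    for hit in nt_arrivals:       # hit = NT molecules arriving this tick
        count += hit
        if count >= n:
            trace.append(1)       # f: 0 -> 1, the neuron fires
            count = 0             # NT re-collected; f back to 0
        else:
            trace.append(0)
    return trace
```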

I wonder - I read a book that I now forget, by a prominent language theorist, using arguments much like this. What a terrible book, because he understood nothing about computability. I wonder if you have been influenced either by him or the field in general. Because you have said nothing that is not computable. "Messy" "fluid" "open" are all ill-defined words in the space of information theory. More importantly, nothing you are describing is uncomputable. The book talked about things like Excel, and how it was exact, created the same result every time, and imagine if your taxes were computed differently each time. Sure, but only because the algorithms chosen were for computing taxes. What a misunderstanding of computing - the same misunderstanding you are showing.

Quote:
So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.
Only if you don't understand information theory.

Quote:
And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one (although even here computational models have proven useful, by describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely it is that the brain will choose that option, even if those associations have nothing to do with the task at hand).
Associations and mistakes are computable. Trivially so.

Quote:
This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs, though, I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.
There we go with rigid Excel! Computing is not rigid, except by choice.

Quote:
Can all the workings of the brain be performed by a TM? Right now, there's no reason to accept the assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.
No, you are presenting personal incredulity, based on a lack of understanding of the field, as 'reason'.

Everything is physics. Physics is computable. Every combination of computable elements is computable. Without a form of dualism, brains have to be computable. Information science 101 and physics 101.

Quote:
So, after all that (and ignoring the pencil-brain thing, which has turned out be a red herring and now I see why) let's look again at the speed question.
I'm completely uninterested in the speed question, especially when discussed with such a basic misunderstanding of physics and computation.
__________________
May your trails be crooked, winding, lonesome, dangerous, leading to the most amazing view. May your mountains rise into and above the clouds. - Edward Abbey

Climb the mountains and get their good tidings.
Nature's peace will flow into you as sunshine flows into trees. The winds will blow their own freshness into you, and the storms their energy, while cares will drop off like autumn leaves. - John Muir
roger is offline
Old 13th August 2009, 08:00 AM   #315
roger
Penultimate Amazing
 
roger's Avatar
 
Join Date: May 2002
Posts: 11,465
Piggy, as an aside, I think there is a lot of misunderstanding going on because Paul, I and others are referring to concepts that we are very familiar with, and that many books have been written about. When Paul or I say "pencil brain" we know and understand the 100 implications we both mean by that. When Paul says slow the brain down, we understand that we are not talking about the implementation domain, where for a specific implementation you cannot run slower than the impulse speed and duration of the signal. After all, what a boring question to ask - for any given substrate there is of course a speed too slow and a speed too fast. Can you imagine starting a JREF post - can I rev my engine too fast? Well, yes! Duh! Or "Can I run my car engine at 0.000001 rpm?" - No! But we could make an engine to do that, if we wanted.

So from my perspective you are arguing extraordinarily strange things - like comparing a pencil brain to a fart. But then I have at least 10 books by leaders in the field under my belt on this one topic alone, and dozens more on computational theory and the like. I guess I can see where you are coming from if you don't recognize the referents, but on the other hand, recognize we are talking in professional shorthand. Pencil brain for us is a UTM. It's a useful thought experiment because it challenges preconceptions - "how could a pencil think" type feelings. Of course, we aren't saying the pencil thinks, but the system produced by the pencil. It gets right to the crux of the matter.

A physicist might say "acceleration times time is velocity" - in that statement is the assumption that we aren't at relativistic speeds, that we are dealing with macro objects where Heisenberg effects are below our measurement accuracy, all kinds of things that don't need to be explained. A literalist JREFer, fresh from reading a bit of Einstein for the first time, hopping into the conversation, would be sputtering "but relativity states....", etc.

There are genuine misunderstandings about computation in this thread as well, but a lot of the argument is of this nature.
roger is offline
Old 13th August 2009, 08:11 AM   #316
William Parcher
Show me the monkey!
 
William Parcher's Avatar
 
Join Date: Jul 2005
Posts: 20,182
Would the robot be equipped with emotions so that it can make proper decisions?
__________________
Bigfoot believers and Bigfoot skeptics are both plumb crazy. Each spends more than one minute per year thinking about Bigfoot.
William Parcher is offline
Old 13th August 2009, 08:19 AM   #317
roger
Penultimate Amazing
 
roger's Avatar
 
Join Date: May 2002
Posts: 11,465
Piggy, one more thing.

Let's forget about "pencil brain", since it is introducing so many spurious assumptions. Rest assured that when Paul or I say pencil brain, all we mean is a form of computer that is functionally identical to any computer you can think of. Ditto "cog brain". This is based on Turing's work, which is probably one of the most important pieces of mathematics done in the 20th century, and extremely well vetted. If you can think of an objection, rest assured you are misunderstanding something.

So, instead of pencil brain, assume we were always talking about a 'big blue' level of computer on steroids: 10^20 processors running in parallel, with a clock speed 1 trillion times as fast as the current clock speed of big blue, with 10^32 words of memory available to each processor, and with a complex, adaptable network connecting the processors that can be completely reconfigured in 1 clock cycle. Heck, assume asynchronous clock timing (each processor running on its own clock) if you want, which makes some computations easier, some harder. Each processor has 32 cores in it, all running in parallel. Equip every processor with a true random number generator. All this in 1 square inch. Etc. I assure you that what we meant by 'pencil brain' can do everything and anything this super-big-blue can do, except of course slower. Slap that pencil brain in a relativistic capsule, and it'll keep up with big blue on any computation possible.

Next, assume that super-big-blue is in a robot body, connected to senses as complex as you like. Vision, tactile, heat sensors, whatever.

No homunculus in the machine, no human interpreting results, no "virtual" tiles being broken. If super-big-blue gets inputs about a tile, they came through that vision and tactile system; the actions it takes go back through the robot's body, and the tile "really" gets broken. (It really doesn't matter whether this is simulated or real, but since you are hung up on that point, assume a real robot body.)

This is what we have been talking about all along, in the shorthand of 'pencil brain'. But since that is a sticking point for you, try super-big-blue-robot instead, without worrying about Turing's math or 'virtual' vs. physical.
__________________
May your trails be crooked, winding, lonesome, dangerous, leading to the most amazing view. May your mountains rise into and above the clouds. - Edward Abbey

Climb the mountains and get their good tidings.
Nature's peace will flow into you as sunshine flows into trees. The winds will blow their own freshness into you, and the storms their energy, while cares will drop off like autumn leaves. - John Muir

Last edited by roger; 13th August 2009 at 08:29 AM.
Old 13th August 2009, 08:40 AM   #318
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Posts: 6,946
Originally Posted by roger View Post
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.

I haven't read it either, but I do a fair bit of slagging myself. In particular, you can find a good number of professional criticisms of the logic he uses to "show" that human consciousness isn't Turing-equivalent. I could explain it myself, but I'm sure you'll be able to find some yourself if you're interested -- just google "lucas-penrose criticism" lol.

EDIT: You will get more hits if you google "lucas penrose FALLACY" instead, lol. That should tell you something.

Last edited by rocketdodger; 13th August 2009 at 08:44 AM.
Old 13th August 2009, 08:42 AM   #319
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 5,292
Originally Posted by Piggy View Post
I'm not sure I really want to dive into the "pencil brain" thought experiment again, but if we do, I'd have to ask for a complete description of the hypothetical set-up.
We were talking about a hypothetical world, right? In this world, it is actually possible to use billions of pencils and paper, one for each neuron, and each neuron is described in terms of when it fires, based on what input, and what output the firing results in.

Any brain input will have to be simulated too, as will any brain output. The simultaneous firing of some neurons can also be simulated.

This huge paper machine will in principle be able to simulate a brain complete with consciousness, but obviously each simulated millisecond will take a few centuries to finish in real time.

After some millions of years, enough simulation will have been performed for the paper machine to have experienced consciousness, but because this consciousness consists of apparently random firing patterns of neurons, nobody will be able to recognise it.

If anybody wanted to have a conversation with this paper machine, they would have to input the simulated aural signals into the appropriate neurons, and then wait some billions of years before the simulated neurons that govern the simulated speech system fire in the patterns that a normal brain would produce to make speech.
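The per-neuron rule being executed here can be sketched in a few lines. This is a deliberately crude toy (threshold units in discrete time, nothing like real neuron dynamics), but it shows the kind of "fires when its summed input crosses a threshold" bookkeeping the paper machine would grind through by hand:

```python
# Toy discrete-time neuron network: neuron i fires on the next tick if the
# summed weights from currently-firing neurons reach its threshold.
# weights[i][j] is the connection strength from neuron j to neuron i.
def step(weights, thresholds, fired):
    """One tick: return which neurons fire next, given who fired now."""
    nxt = []
    for i, row in enumerate(weights):
        total = sum(w for w, f in zip(row, fired) if f)
        nxt.append(total >= thresholds[i])
    return nxt

# Three neurons in a ring: 0 excites 1, 1 excites 2, 2 excites 0.
weights = [[0, 0, 1],
           [1, 0, 0],
           [0, 1, 0]]
thresholds = [1, 1, 1]
state = [True, False, False]        # neuron 0 fires at t=0
for t in range(3):
    state = step(weights, thresholds, state)
# After 3 ticks the pulse has travelled the ring back to neuron 0.
```

Each tick is a handful of additions and comparisons per neuron — trivial on paper for three neurons, and the same arithmetic, just repeated billions of times, for a whole brain. Hence the "centuries per simulated millisecond" estimate.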

Quote:
Also, if we're talking about simulating billions of neurons, what sort of simulation do we mean?

Are we talking about a virtual or actual simulation?
I can't believe you are asking this question - or I may have no idea what you are thinking about! Practically anything you do on a computer is a virtual simulation. A pencil/paper simulation could never be an actual simulation of anything that does not involve pencils and paper!

Quote:
If the only observer to the actual simulation were a dog, the event would still have happened. There would still be one more busted tile in the universe.

If the only observer to the virtual simulation were a dog, then there wouldn't even be any simulation, when you get right down to it, just the physical and electronic actions of the machine.
Exactly. So why did you ask?

Quote:
That's why I say roger is wrong to say that I'm somehow introducing dualism into the pencil scenario (as I understand it -- perhaps I was incorrect about the setup he proposed). As I envisioned that scenario, it was inherently dualistic. You actually do have the equivalent of a homunculus, a ghost in the machine -- that is, the man behind the pencil. In the human brain, or in a circuit brain, there is no homunculus.
It is essential for the simulation to be able to simulate every single element that is part of the consciousness in a real brain. Neurons are fairly simple as far as we know, and they are governed by simple rules, which is excellent for simulations. However, as long as we do not know for sure how exactly to achieve consciousness, we also cannot be sure that we have got all elements right. For this theoretical paper machine to be certain to work, all neurons will have to be simulated, and if neurons have more sophisticated functions than we know today, these would have to be simulated too. If there are other cells that have a function in consciousness, these too will have to be simulated.

Once we know exactly how to achieve consciousness, i.e. we have a working CTM, then we may be able to reduce the number of elements, both in type and quantity. That is the goal of CTM, because obviously a paper simulation, or even a super-fast complete computer simulation of a brain, is too impractical for us at this stage.

Quote:
So if we get into the pencil brain thing, I think it would help to have a complete description of what's going on, and clarity regarding whether this is supposed to be an actual or virtual simulation.
We do not have to discuss the pencil brain thing. It really does not matter on what hardware the simulation runs. The important point is really that we are of course talking about a virtual simulation, and all attempts at simulating consciousness or intelligence have been virtual. One day we could probably do actual simulations on biologically simulated brains, but I think that virtual simulations are much easier to implement.

It is essential to our current understanding of consciousness that all elements that are part of consciousness work according to a fixed set of rules. That would mean the brain is equivalent to a Turing machine. If it turns out that neurons can change states in unpredictable ways, we might not be able to simulate a brain with paper machines or any other machine.
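The "fixed set of rules" point is really a claim about determinism: if every element follows a fixed rule, the whole system is a deterministic state machine, and the same start state always yields the same trajectory — which is exactly what lets any substrate (paper, silicon) replay it. A minimal sketch, with an arbitrary stand-in rule of my own invention:

```python
# A system governed by a fixed rule is a deterministic state machine:
# identical starting states always produce identical trajectories,
# regardless of what hardware replays the rule.
def trajectory(rule, state, steps):
    out = [state]
    for _ in range(steps):
        state = rule(state)
        out.append(state)
    return out

rule = lambda s: (s * 5 + 3) % 16   # arbitrary stand-in for "the update rule"
a = trajectory(rule, 7, 10)
b = trajectory(rule, 7, 10)
assert a == b   # two runs from the same state cannot diverge
```

This is also why genuine unpredictability in neurons would be fatal to the argument: a rule you cannot write down is a rule no machine, paper or otherwise, can replay.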

Interestingly, our brain does in fact change states unpredictably because of damage from chemicals and cosmic rays, and some of the states caused randomly in this way could theoretically lead to new insights for that brain. However, as our present understanding goes, consciousness is not dependent on random influence and damage.
__________________
Steen

--
Jack of all trades - master of none!
Old 13th August 2009, 08:43 AM   #320
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Posts: 6,946
Originally Posted by Zeuzzz View Post
Yeah, and I use a calculator to do things I can't figure out on my own. And guess what? The calculator was programmed only by someone's conscious input. As is every program.

I can't see what difference there is between a global search heuristic program and any other program. The computers are doing what we consciously tell them to. Nothing more, nothing less.
You are doing what your DNA tells you to do. Nothing more, nothing less.

So... what was your point, again?