|
Tags | artificial intelligence , consciousness |
View Poll Results: Is consciousness physical or metaphysical? |
Consciousness is a kind of data processing and the brain is a machine that can be replicated in other substrates, such as general purpose computers. | 81 | 86.17% | |
Consciousness requires a second substance outside the physical material world, currently undetectable by scientific instruments | 3 | 3.19% | |
On Planet X, unconscious biological beings have perfected conscious machines | 10 | 10.64% | |
Voters: 94. You may not vote on this poll |
13th May 2012, 09:23 PM | #361 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
You seem to miss the distinction between exhibiting a conscious behavior, and being conscious.
The distinction being that an entity is "conscious" when it displays a set of "conscious behaviors" that is larger than some threshold. For example, do you think that the ability to write a poem means an entity is conscious? Does lacking that ability make an entity not conscious? What about the ability to react to visual stimuli? Are blind people not conscious? When I say a program demonstrates fundamentally conscious behaviors, it means that those behaviors are part of the set of behaviors we humans consider fundamental things that a conscious entity might do. In particular, the ability to imagine things. But it would be an error to think that "imagination" is limited to how you think of imagination, because you haven't taken the time to formalize it into something that can be discussed logically. These researchers have, and they did a very good job of it, and their robot imagines things, according to any formal definition of the term "imagine" that anyone has come up with. |
13th May 2012, 09:30 PM | #362 |
Philosopher
Join Date: Jun 2005
Posts: 6,946
|
To be specific, the program was interfacing with a robot, and it determined that it should rotate the arm a certain way by imagining what the result would be if it moved a certain way. When it imagined a movement that led to a goal condition, the movement stopped being an imagined one and became a real one.
Kind of like if you were looking through a telescope and you saw the girl across the street enter her apartment, and you remember that her bedroom is to the right: your decision to rotate the telescope in that direction is based on you imagining what will take place if you rotate the telescope to the right, and liking the result that might lead to. |
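The imagine-then-act loop described above can be sketched in a few lines. This is a guess at the structure only; the function and parameter names are hypothetical, not the researchers' actual code:

```python
def imagine_then_act(state, candidate_moves, predict, goal_reached, execute):
    """Try each candidate move in imagination first: run it through an
    internal forward model (`predict`) without acting. Only a move whose
    imagined outcome satisfies the goal is promoted to a real movement."""
    for move in candidate_moves:
        imagined = predict(state, move)  # simulate the move, don't perform it
        if goal_reached(imagined):
            execute(move)                # the imagined move becomes a real one
            return move
    return None                          # no imagined move reached the goal
```

Here `predict` plays the role of imagination: the same machinery that could drive the arm is run detached from the motors, and its output is inspected before anything physical happens.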
15th May 2012, 02:23 AM | #363 |
Philosopher
Join Date: Jul 2010
Posts: 5,295
|
|
15th May 2012, 02:27 AM | #364 |
Philosopher
Join Date: Jul 2010
Posts: 5,295
|
|
15th May 2012, 08:38 AM | #365 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Someone suggested a thermostat had a tiny morsel of consciousness. I really think memory is an essential part of consciousness, so I'd argue against this, but there was a function in a game I made for which it's easier to argue that it qualifies.
Start with the assumption that consciousness is not an either-or attribute but a matter of degree, from zero, to tiny, to human (and presumably beyond human, perhaps infinitely). My function was designed to get a computer-controlled player unstuck (a counter-sphexish function). It worked beautifully and looked like it was being controlled by a conscious player. Here's the algorithm: 1 - Remember the last place you were before this place. 2 - Remember the last direction you were moving before you started moving in the current direction. 3 - If you've been in the same place for more than 3 seconds, start moving in the direction you were moving before the current one, move forward for one second, then resume normal movement. It both seems to fulfill some definitions of consciousness and looks very conscious to the eye. I know it doesn't write poetry, but is there a clear argument that it's not a tiny bit conscious? |
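The three-step algorithm above translates almost line for line into code. A minimal sketch; the class and method names are my own, not the original game's:

```python
class UnstuckController:
    """Counter-sphexish helper: if the agent has sat in one place for
    more than STUCK_SECONDS, back out along its previous heading for
    ESCAPE_SECONDS, then resume normal movement."""

    STUCK_SECONDS = 3.0
    ESCAPE_SECONDS = 1.0

    def __init__(self):
        self.prev_place = None       # step 1: last place before this one
        self.place = None
        self.since = 0.0             # when we arrived at the current place
        self.prev_direction = None   # step 2: direction before the current one
        self.direction = None
        self.escape_until = None     # while set, we are backing out

    def update(self, place, direction, now):
        """Return the direction actually to move in at time `now`."""
        if place != self.place:
            self.prev_place, self.place, self.since = self.place, place, now
        if direction != self.direction:
            self.prev_direction, self.direction = self.direction, direction
        if self.escape_until is not None:
            if now < self.escape_until:
                return self.prev_direction          # still backing out
            self.escape_until = None                # escape over, resume
        elif now - self.since > self.STUCK_SECONDS and self.prev_direction:
            # step 3: stuck -- move the way we moved before, for one second
            self.escape_until = now + self.ESCAPE_SECONDS
            return self.prev_direction
        return direction
```

Called once per frame with the player's position, intended direction, and the clock, it returns the direction actually to move; on most frames that's just the intended direction.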
16th May 2012, 08:24 PM | #366 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
I forgive all conscious entities that don't write poetry.
|
16th May 2012, 11:34 PM | #367 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
|
23rd May 2012, 07:56 PM | #369 |
Banned
Join Date: Dec 2007
Posts: 5,211
|
Can you expand on this? What about the people who are prone to bad trips? Some people, I've noticed, just cannot handle any sort of psychedelic that alters their consciousness. 3-MeO-PCP might be something of interest for you to research, quarky. Dissociatives in general are being researched scientifically for the role they may play in helping depression or anxiety, but 3-MeO-PCP is on another level from its chemical cousins. It seems to be a dissociative that dissociates purely the emotional part of your psyche, while still leaving you lucid and able to function normally without the emotions getting in the way. A very unique trait. I quote from another forum: I think a really common recurring pattern is this state of mind where you don't feel inebriated much at all, but you do some serious out-of-character **** without even blinking an eyelid, great in small doses if you got social anxiety but in my case even 8mg orally and 3hrs later I'm making out with a woman way too old to be behaving like that... but 3-MeO-PCP, it has this way of making you completely detached and absolutely present at the same time... like you're aware of your emotions (fear, stress, horniness etc) but you don't feel them, just observe and play with them. ^ I could have written that myself. You've probably seen this site quarky http://www.mdma.net/#mdmalife In fact, I'm pretty damn sure you wrote it |
24th May 2012, 09:51 PM | #370 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
Yikes, that was quite the read. Definitely not me writing. Phenethylamines have lots of promise for trauma. MDMA is the most potent analgesic I've encountered, though it's seldom touted as such. Probably because it's sort of a one-time deal. Best used to overcome the fear of LSD, imho. And LSD is best used to overcome the fear of silence. Overcoming the intense materialism of modern culture, and finding joy in simple observational states, is what is missing, imho, for the future well-being of humankind. The psychedelics are a stepping stone. They introduce us to unspoken possibilities of consciousness. They remind us that a blissful mental state is within our capabilities; that ecstasy is a human right. Yet, it requires effort. There are no happy pills. But there are pills that can awaken the notion of the potential of our neuro-chemistry. Tryptamines have more promise, imho. Perhaps it's old Protestant ethics. You have to work. MDMA is more like a vacation. Expensive. Relaxing. Fun. But very little learned on the ride. |
25th May 2012, 12:39 AM | #371 |
Featherless biped
Join Date: Aug 2008
Location: Aporia
Posts: 26,431
|
|
25th May 2012, 04:02 AM | #372 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
|
__________________
"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa "We live in a world of more and more information and less and less meaning" Jean Baudrillard http://bokashiworld.wordpress.com/ |
|
25th May 2012, 04:12 AM | #373 |
Featherless biped
Join Date: Aug 2008
Location: Aporia
Posts: 26,431
|
|
25th May 2012, 06:30 AM | #374 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
25th May 2012, 10:52 AM | #375 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
|
25th May 2012, 11:12 AM | #376 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
Sure. Except, at this stage, we can do a lot with remote observation.
Would a conscious robot have its own agenda? If not, wouldn't it be better for it to have a link to a person's consciousness? I'm not really opposed to any of this, and I got a bit feisty for the sake of debate. It bothers me that we don't know most of the species on the planet, and we're about to erase a bunch of them. We've barely begun to explore this planet, in a way, so I find the manned space exploration stuff rather 'escapist' in nature. I could get excited about an attempt to have a colony deep within the Antarctic ice-cap. I think it would be good training, and I'm personally curious as hell about what's down there, under all that ice. |
25th May 2012, 11:22 AM | #377 |
Philosopher
Join Date: May 2007
Posts: 6,900
|
I'm kind of surprised that it's Catholic - would've thought there might be some danger of short-circuiting during baptism - but not surprised that it finds it stifling and stuffy. I sense a lot of angst, in the last line especially. I wonder if it's ever been abused by a priest? |
__________________
"Say to them, 'I am Nobody!'" -- Ulysses to the Cyclops "Never mind. I can't read." -- Hokulele to the Easter Bunny |
|
25th May 2012, 05:55 PM | #378 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Robots on places like Mars or a Jovian moon would be too far for efficient control by an Earth-bound consciousness. Programming with appropriate research desires and anti-sphexishness would be real pluses.
Again, we need not think of consciousness as only and exactly what our brains do. We have a "selfish agenda, with the possibility of betraying our master" module that no doubt helped our ancestors survive, but we need not build that module into our conscious space exploration robots. Must a robot 100% loyal to us be unconscious? |
25th May 2012, 09:25 PM | #379 |
a carbon based life-form
Join Date: Nov 2005
Posts: 39,049
|
|
26th May 2012, 12:33 AM | #380 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
26th May 2012, 04:33 AM | #381 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
|
26th May 2012, 07:13 AM | #382 |
Persnickety Insect
Join Date: Dec 2002
Location: Sunny Munuvia
Posts: 16,343
|
It smacks of sphexishness.
|
__________________
Free blogs for skeptics... And everyone else. mee.nu What, in the Holy Name of Gzortch, are you people doing?!?!!? - TGHO |
|
26th May 2012, 08:24 AM | #383 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
26th May 2012, 01:53 PM | #384 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
|
26th May 2012, 03:12 PM | #385 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
26th May 2012, 04:43 PM | #386 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
|
27th May 2012, 12:17 AM | #387 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Quote:
I don't think our incoherently evolved desires are mandatory features of consciousness. We need not include them in our conscious robots. The biggie is the compulsion to manage and manipulate social rank. This need only be a feature of social animals. A lone planet explorer robot would not need any social programming. A team of explorer robots would have their rank, if any, hard-wired, with no temptation to one-up each other or to be disloyal to their masters. Such programming is accomplished by making behavior we don't want painful, and behavior we do want pleasurable. That's how nature programmed us. I could see reason to program a group of robots to compete for our approval. We'd not let them want to sabotage each other. Their colony would be doomed if they were to undermine each other to get ahead. That would make them too human. Excluding the socially destructive features of our kludgy brains wouldn't make them less conscious. Just less destructive. |
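The "painful/pleasurable" programming described above is essentially what reinforcement learning calls reward shaping: wanted behavior gets a positive signal, unwanted behavior a strongly negative one, and the agent learns values that steer it away from sabotage without any explicit prohibition. A toy sketch, with all behavior names and numbers purely illustrative:

```python
import random

def train(behaviors, reward, episodes=1000, lr=0.1, eps=0.2, seed=0):
    """Learn a value for each behavior from its 'pain/pleasure' signal.
    The agent mostly repeats whatever currently looks best (greedy), with
    a little exploration (eps); rewarded behavior ends up dominant."""
    rng = random.Random(seed)
    value = {b: 0.0 for b in behaviors}
    for _ in range(episodes):
        if rng.random() < eps:
            chosen = rng.choice(behaviors)                   # explore
        else:
            chosen = max(behaviors, key=lambda b: value[b])  # exploit
        # nudge the stored value toward the 'felt' pain or pleasure
        value[chosen] += lr * (reward(chosen) - value[chosen])
    return value
```

With a reward of +1 for cooperation and -1 for sabotage, the learned value for cooperation converges toward +1 and sabotage toward -1, so the greedy choice settles on cooperation; the unwanted behavior is never forbidden, just made "painful".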
27th May 2012, 08:53 AM | #388 |
Illuminator
Join Date: Jul 2009
Posts: 3,874
|
Nature programmed us? Really?
We want children, so that's why childbirth isn't painful????? You really think consciousness comes without any downsides? The programming just needs some tweaking? You really do live in a fantasy world, dude. Good luck with that. My advice to you: stay away from the real world, you're not going to understand it at all. |
27th May 2012, 12:39 PM | #389 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
Your attitude is an example of our often irrational "I'm better than you" behavioral pattern, which we would certainly not program into cooperative robots. You no doubt get a surge of pleasant satisfaction from telling someone they live in a fantasy world. I'm avoiding the temptation right now to return your incivility because I know I'll feel bad later.
No, I don't live in a fantasy world. I study behavioral biology. We want children because we sense the enormous pleasure parenting will bring us, via oxytocin. We conceive because of the enormous pleasure we get from having sex. What we do is decide, on balance, whether the anticipated pleasure is worth the pain. Women are not warned by nature how much pain will be involved in delivery, but once the humongous oxytocin rush follows the pain, they may resolve to repeat it. Remember, though, our pain/pleasure system evolved before our heads got so big, when childbirth was less painful. Most animals don't think much about things like this. They just pursue pleasure and avoid pain. Pain during delivery is unavoidable for them, so it doesn't discourage conception, since they have no clue sex results in pregnancy. |
27th May 2012, 01:22 PM | #390 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
Ants in a colony behave somewhat robotically, and I think the colony as a whole makes a decent model of the possibilities. Ants have complete loyalty to the functioning of the colony and the queen, yet the workers can overthrow the queen, and elect a new queen, via manipulation of the babies.
They've really got it down; their system works. It's passed the test of time. There's division of labor and variation in the individuals for certain tasks. All are fearless in their willingness to do what must be done for the colony. It's a wonder that humans haven't enslaved ants and termites yet. They would essentially do all of our work and tend to all of our needs. They wouldn't even care about the enslavement. |
27th May 2012, 04:25 PM | #391 |
Philosopher
Join Date: May 2007
Posts: 6,900
|
Programming complete loyalty in a conscious being would be an interesting AI challenge. I think we'd have to be careful that it didn't become self-conscious, because if it did, it might become conscious that it had been programmed, and start to ask questions. So we'd better program it never to question its programming; for if it did ever question its programming, it might acquire the wherewithal to change it, including its loyalty programming.
Would this work? Hmm. Depends, I guess. It may be that the ability to change one's programming - to override it with a higher-level program - is a necessary attribute or by-product of self-consciousness; perhaps of all consciousness; or perhaps only of consciousness beyond a certain complexity. I think human consciousness is characterized by the ability to overcome emotion with reason; meaning, on a computational model of consciousness, the ability to "self-program": that is, the ability to create new behavior routines based on a careful and considered evaluation of a certain class of stimuli in order to override competing emotional/instinctive/preprogrammed behaviors we're born with: e.g., overcoming one's innate fear of heights when learning to climb a ladder, rockclimb, parachute, etc. While it's true that people have very strong inhibitions against doing some things - killing loved ones, for example - it's also true that these inhibitions, these basic programs if you will, can be overridden: as self-conscious beings, we seem to be free to do whatever we want (or can at least, physically). Is this just the result of kludgy programming, imposing higher level, more recently evolved rational programs over more ancient emotional programs (and perhaps even more ancient "instinctive" programs, assuming there's a difference -- emotions seem to have an affective component, a sort of heads up to the conscious, rational faculty: "here's how you've been preprogrammed to assess the situation in the short term, how your instincts 'feel' about things... what's your long-term take?"; and a reflexive component, where one instantly flinches in the face of sudden movement, say, before one has a chance to rationally assess whether it's a threat or not)? Or, is this essential to all consciousness? Or only higher forms of it; perhaps only self-consciousness? Or not essential at all? 
We can all think of examples of humans who are, or appear to be, completely loyal, but none of them are very attractive: yes-men, slaves (where the slave is completely submissive and happy to follow orders), zealots, fanatics (one might counter that a patriot who is completely loyal is generally thought to be praiseworthy, but "my country, right or wrong" is dangerous code, imho, in citizenship and/or C++). And from psychology - Pavlov, especially, and studies of brainwashing - it would seem that even the most fanatical, enslaved, yes-man behavior can be changed, be "de/reprogrammed", so to speak. If we want to ensure complete loyalty, I think we'd have to inhibit our robot from being able to change its programming very much, if at all -- because if we allow it to create new behavior routines, it might create one that overrides its loyalty program; that is, the class of new programs it could write for itself as it explored and adapted to its new environment would be infinitely complex, and there's no way we could know ahead of time the effects on its behavior of every new program it might write for itself (if we try to limit its ability to write new programs to some safe level, a level of complexity that we assume can't threaten its loyalty program -- and whether we could even know that level is questionable given a lot of things - computational complexity, how dumb we are, how full of bugs and unforeseen behaviors even the smallest programs are -- it might simply, inadvertently perhaps, write a program that overcomes that limit). Yet isn't the ability to create new behavior routines that adapt one's behavior to one's environment a good chunk of what we mean by learning? |
If so, then our conscious completely loyal robots will be, effectively, data-bots, sophisticated mobile remote recording devices, able to react to their environment and modify behavior strictly within preprogrammed parameters, but complete morons otherwise, unable to learn new classes of behavior from their experience (in AI jargon, they would be expert-systems, following the routines they are given and becoming more expert within the domain of their program only, forbidden to generalize what they have learned into other behavioral domains where it might have unforeseen consequences: i.e., make them independent of us and our preprogramming). Is this "consciousness"? Is something that can't create new behavioral routines for itself "conscious"? That's a very good question. Its answer will depend, of course, on how you define consciousness. For humans, and many animals, the ability to modify behavior seems a large and vital part of consciousness, in many ways the most interesting part. That's probably why there's something abhorrent to most of us about fanaticism, completely loyal drones marching off to do their duty, never questioning orders. Then again, if our robot has been programmed without any behavioral freedom, or any potential for it, then, strictly speaking, it's just doing what it's designed to do. And what could be more meaningful than that, for any conscious entity? Who needs existentialism, anyway. Perhaps all our loyal robots would really need is the right religion to reconcile them to their servitude, make them happy with their lot. We could even program them to worship us as gods. Though god forbid one ever bumps its head, scrambles its wiring, and starts thinking for itself... |
27th May 2012, 06:18 PM | #392 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
Geepers, blobru.
Was that your term paper or your thesis? Are you saying that we are robots, and we haven't quite noticed yet? There's that thing in the back of the neck. I've wondered what's up with that. |
27th May 2012, 07:59 PM | #393 |
Philosopher
Join Date: May 2007
Posts: 6,900
|
No. Just the opposite.
I'm saying we aren't robots*, because we have noticed. That many kinds of heuristics and learning skills and behaviors which we associate with consciousness pose serious, possibly fatal, problems for programming "complete loyalty"; anything that might qualify as self-consciousness, particularly. That if a robot is intelligent, and conscious of its programming, it may learn to override it; if we give it the autonomy to create its own concepts, its own ideas about the world, it's hard to predict what will happen when its own ideas come into conflict with its preprogrammed ideas, very hard to predict how one domain will map to the other. So complete loyalty will be a significant challenge if we want our robot to be more than just an expert system; if we want it to be able to learn on its own, in the sense of self-programming, which seems crucial in many definitions of consciousness (any consciousness beyond the most simple stimulus and response), then guaranteeing its complete loyalty to its programmers may not be possible. *where robot is defined as a completely loyal, 'thinking' machine Sorry I wasn't clearer (my clarity routine seems to have a few bugs). (and editor -- that was WAY too many words)
Quote:
|
27th May 2012, 11:56 PM | #394 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,039
|
Yes, nature programmed us. But its programs are general purpose, not specific to every situation we encounter. Sometimes it would be useful to have longer fingers, other times shorter but ones less likely to be injured, but I've only got one set of fingers.
The pain response is adaptive. That it means women go through pain in childbirth sucks, but anything that caused a weakened pain response would likely affect it throughout her life, and thus be maladaptive. On the other hand, once a woman goes into labour, the pain of labour is unlikely to affect her reproductive success, so even if some mutation arose that happened to confer a lowered pain response only during childbirth, it's not likely to be selected for. If you have a naive view of evolution, it's easy to say "our minds can't be the result of evolution!" but you're only responding to your own naive view. |
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together." Isaac Asimov |
|
28th May 2012, 02:36 AM | #395 |
New York Skeptic
Join Date: Aug 2001
Posts: 13,714
|
|
28th May 2012, 03:33 AM | #396 |
Penultimate Amazing
Join Date: Feb 2005
Location: Shanghai
Posts: 16,039
|
|
28th May 2012, 06:31 AM | #397 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|
28th May 2012, 11:38 AM | #398 |
Under the Amazing One's Wing
Join Date: Nov 2005
Posts: 2,546
|
It would take a lot of generations of selective breeding before we would have ants that would get us our slippers in the morning.
E. O. Wilson found that the attention social insects give to the needs of individuals in their colony follows their genetic relatedness. The more genes they have in common, the more they cooperate, even within an otherwise homogeneous colony. I'm still wondering if a robot that was fully conscious would necessarily have to be one that could turn on its masters. |
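The relatedness pattern described here is usually summarized by Hamilton's rule: helping is favored when r × B > C, where r is genetic relatedness, B the benefit to the recipient, and C the cost to the helper. As a one-line check:

```python
def hamilton_favors_altruism(relatedness, benefit, cost):
    """Hamilton's rule: kin selection favors an altruistic act
    when r * B > C."""
    return relatedness * benefit > cost
```

Haplodiploid full sisters (as in many ant colonies) share r = 0.75, so even a modest benefit-to-cost ratio favors helping; a distant nestmate with low r needs a much larger one.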
28th May 2012, 12:06 PM | #399 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
The ants wouldn't get your slippers, but they would complete a circuit within an artificial colony, or hive, through predictable behavior. The tubes would be baited at certain junctures, to facilitate other actions, like turning the fan on in the morning and off at night. |
Slipper fetching is possible, even without the digital intervention devices, if the ants were army ants. And you had the right pheromones. |
28th May 2012, 12:22 PM | #400 |
Banned
Join Date: Oct 2007
Posts: 20,121
|
|