Hard Problem of Consciousness, special pleading?

Dancing David

Hello all,
The Hard Problem of Consciousness
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

is definitely a hoary old chestnut for the R&P forums, and I feel it is a case of special pleading, among many other fallacies, e.g. the Loose Problem of Definition.

People don't argue about the Hard Problem of Digestion, although the functions of the liver and metabolism are about as well understood as neuroscience.

People don't argue about the Hard Problem of...
- Earthquakes
- Volcanic Eruption

:D
 

Obviously the problem of definition, and its corollary, the fallacy of equivocation, make all these discussions difficult.
As I consider it, if a tiny stimulating electrode inserted in the brain can generate conscious content (an experience) by driving activity in an interconnected circuit of neurons, then what is left to explain? The activity in the circuit IS the experience. Considered in this way, the "hard problem" is reduced to a complex but not impossible problem of neuroscience, that is: what kind of activity in which circuits is which conscious content? In addition, the problem of "consciousness" as a separate entity from brain activity also disappears.

I confess to not having read Chalmers, or Dennett, on the topic.

I have read and considered this paper by Bjorn Merker. It has a lot to say about what "consciousness" is, and how and why the brain is designed to have it.
"It is the principal claim of the present target article that the vertebrate brain incorporates a solution to this decision problem, that it takes the general form of a neural analog reality simulation of the problem space of the tripartite interaction, and that the way this simulation is structured constitutes a conscious mode of function. It equips its bearers with veridical experience of an external world and their own tangible body maneuvering within it under the influence of feelings reflecting momentary needs, that is, what we normally call reality. To this end it features an analog (spatial) mobile “body” (action domain) embedded within a movement-stabilized analog (spatial) “world” (target domain) via a shared spatial coordinate system, subject to bias from motivational variables, and supplying a premotor output for the control of the full species-specific orienting reflex"
The tripartite interaction is the interaction between target selection, action selection and motivation (needs/drives).
"...It is not possible
to reap the benefits ... short of finding
some way of interfacing the three state spaces – each multidimensional
in its own right – within some common
coordinate space (decision framework) allowing their separate
momentary states to interact with and constrain one
another."

He is suggesting that the interaction of 1) needs and drives, 2) the location of the individual in the world in relation to surrounding objects, and 3) the position of the body must all be represented in a single overlapping multi-dimensional space in order to allow rapid and accurate targeting for the organism to survive. This, he suggests, is the evolutionary origin of "consciousness" (neural analog reality simulation).
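As a toy illustration of that architecture (my own sketch, not Merker's model or code; every name and number in it is made up): the three state spaces can only constrain one another once they are projected into one shared egocentric frame.

```python
import numpy as np

# Toy sketch of the "shared coordinate space" idea (my construction, not
# Merker's): motivation, target locations, and body state are expressed in
# one egocentric 2-D frame, where they jointly bias a single orienting
# decision.

# Target locations in the world, in egocentric (x, y) coordinates.
targets = {
    "water": np.array([3.0, 1.0]),
    "food": np.array([-2.0, 4.0]),
}

# Motivational state: current strength of each need (arbitrary units).
needs = {"water": 0.9, "food": 0.3}

# Body state: current heading, as a unit vector in the same frame.
heading = np.array([1.0, 0.0])

def orient(targets, needs, heading):
    """Choose the target that best combines need strength, proximity,
    and alignment with the current heading."""
    def score(name):
        pos = targets[name]
        distance = np.linalg.norm(pos)
        alignment = np.dot(pos / distance, heading)  # -1..1: cheap to turn toward
        return needs[name] * (1.0 / distance) * (1.5 + alignment)
    return max(targets, key=score)

print(orient(targets, needs, heading))  # -> water
```

The toy is not conscious, of course; Merker's claim is about why vertebrate brains evolved such a shared simulation in the first place, and why operating in it constitutes a conscious mode of function.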

It's quite an interesting read.

I can't make the link work... It's called "Consciousness without a cortex", and there is a full PDF available through Google Scholar...
 
The more I'm exposed to the "hard problems" of philosophy, the more I'm inclined to think that something doesn't become a problem until it needs to be solved.

Nothing about the "hard problem" of consciousness in philosophy suggests to me that there's something there that needs to be solved.

The hard problem of earthquakes, on the other hand, is a real problem.

The hard problem of consciousness is right up there with the hard problem of Star Wars hyperdrive, in terms of things that are interesting to a certain kind of nerd, but pose no real problem, nor offer any practical result.
 

Would you go through a transporter, or do you think it's suicide?
 
Exactly my point! The hard problem of consciousness is just as much of a "problem" as the hard problem of Star Trek transporters.

There are three kinds of people who use transporters: James Tiberius Kirk, who can't hear your moaning about "consciousness" over the sound of how awesome he is. Bones McCoy, who bitches and moans about "consciousness", but uses the teleporter all the time anyway. And Spock, who coldly, logically, doesn't give a ****.

There's probably a personality quiz out there somewhere that tells you what kind of Star Trek character you'd most closely resemble, as you use the teleporter just the same as everyone else because you realize that the "hard problem of consciousness" doesn't have any practical applications, and the teleporter does.
 

It's kind of interesting, though, isn't it? And the hard problem does have practical applications. How are we going to treat AI when it reaches our level?
 

Sure, it's interesting. Just not important. Not really a problem, if you get my drift.

And I'm sure that however we end up treating peer AIs, it will be on practical grounds, not philosophical ones. I mean, look at slavery in the US. We didn't need to solve the "hard problem of consciousness" to determine that black people were human beings entitled to the same treatment from their peers as everyone else.
 

How are you going to practically determine how to treat peer AI's without considering whether they're conscious or not? Isn't that an important factor in how you treat something? We don't care about stepping on ants, but we buy dolphin-safe tuna. Whether a thing is conscious, and to what degree that consciousness is developed, is important to us.
 

Sure. And most of us have developed heuristics to address that, without ever bothering to consider the philosophical question of what consciousness really is, or what it means. We didn't need to solve the "hard problem" to decide to be nicer to dolphins.

ETA: You seem to believe that if we ever encounter an alien intelligence, we won't be able to figure out how to deal with it, until we understand what consciousness really is, or something. But the fact is, we encounter alien intelligences all the time, and we have no problem at all figuring out how to deal with them, even though we don't have any *********** clue what consciousness really means.
 
Sure. And most of us have developed heuristics to address that, without ever bothering to consider the philosophical question of what consciousness really is, or what it means. We didn't need to solve the "hard problem" to decide to be nicer to dolphins.

Because they're similar creatures to us. AI is a whole nother thing.

ETA: You seem to believe that if we ever encounter an alien intelligence, we won't be able to figure out how to deal with it, until we understand what consciousness really is, or something. But the fact is, we encounter alien intelligences all the time, and we have no problem at all figuring out how to deal with them, even though we don't have any *********** clue what consciousness really means.

If we encounter biological entities, we'll assume they're conscious. If we encounter machine intelligences who assure us they're conscious, we'll wonder. We have experience dealing with conscious biological beings. We have no experience dealing with conscious machines.

I know you think philosophy is a waste of time, but these ethical questions are going to have to be addressed. You're seeing ethical dilemmas pop up in automated vehicles too. How much should the car value the driver's life? Is the driver worth five pedestrians if the car has to avoid a deadly head-on collision and plow into a family walking by the side of the road? Two pedestrians? Ten? None?
 
Because they're similar creatures to us. AI is a whole nother thing.



If we encounter biological entities, we'll assume they're conscious. If we encounter machine intelligences who assure us they're conscious, we'll wonder. We have experience dealing with conscious biological beings. We have no experience dealing with conscious machines.

I agree.


I know you think philosophy is a waste of time, but these ethical questions are going to have to be addressed.

Technically, they don't HAVE to be addressed. I just think it's fun to address them.


You're seeing ethical dilemmas pop up in automated vehicles too. How much should the car value the driver's life? Is the driver worth five pedestrians if the car has to avoid a deadly head-on collision and plow into a family walking by the side of the road? Two pedestrians? Ten? None?

That's a completely, totally different topic.
 
Would you go through a transporter, or do you think it's suicide?

I do.

Exactly my point! The hard problem of consciousness is just as much of a "problem" as the hard problem of Star Trek transporters.

There are three kinds of people who use transporters: James Tiberius Kirk, who can't hear your moaning about "consciousness" over the sound of how awesome he is. Bones McCoy, who bitches and moans about "consciousness", but uses the teleporter all the time anyway. And Spock, who coldly, logically, doesn't give a ****.

There's probably a personality quiz out there somewhere that tells you what kind of Star Trek character you'd most closely resemble, as you use the teleporter just the same as everyone else because you realize that the "hard problem of consciousness" doesn't have any practical applications, and the teleporter does.

I would not be using the teleporter before I'd sussed out how a mind/consciousness exists in relation to spacetime.

I'm also techno-pessimistic about the possibilities of up/downloading one's consciousness to a computer, even if technology progresses rapidly for 10K more years. I think your mind is almost definitely physically tied to your material brain in a way that can't be broken, even in theory.
 
I think the phrase "hard problem" is basically a misnomer. Consciousness is extremely mysterious, but that doesn't make it a problem in need of solving any more than, like, the possible alien megastructure that makes that star blink sometimes.
 
Because they're similar creatures to us. AI is a whole nother thing.
I bet you it's not.

If we encounter biological entities, we'll assume they're conscious. If we encounter machine intelligences who assure us they're conscious, we'll wonder. We have experience dealing with conscious biological beings. We have no experience dealing with conscious machines.
I don't think that will be a problem. We have plenty of experience imagining conscious machines. And we seem to be extremely good at coming up with expedient solutions to pressing problems without getting bogged down in philosophical concerns.

And so what if we have doubts? Humans have doubts about a lot of stuff. Humans are also adept at pressing on in spite of their doubts.

I know you think philosophy is a waste of time, but these ethical questions are going to have to be addressed. You're seeing ethical dilemmas pop up in automated vehicles too. How much should the car value the driver's life? Is the driver worth five pedestrians if the car has to avoid a deadly head-on collision and plow into a family walking by the side of the road? Two pedestrians? Ten? None?
Changing horses. You were arguing that we needed to solve the problem of consciousness in order to solve these ethical problems. In fact, we're actually pretty good at solving ethical problems without getting bogged down in philosophical concerns.

Self-driving cars are a tool, like table saws or hydraulic presses or airplanes. And it turns out that we are actually pretty good at pricing our tools in units of human lives. We'll end up dealing with self-driving cars the same way we deal with self-loading firearms. It won't be pretty. It won't be entirely consistent. It won't be philosophical. But it will work.

I think philosophy is a waste of time in part because the problems you say we need philosophy to solve, it turns out we actually are solving without philosophy.
 
Because they're similar creatures to us. AI is a whole nother thing.



If we encounter biological entities, we'll assume they're conscious. If we encounter machine intelligences who assure us they're conscious, we'll wonder. We have experience dealing with conscious biological beings. We have no experience dealing with conscious machines.

I know you think philosophy is a waste of time, but these ethical questions are going to have to be addressed. You're seeing ethical dilemmas pop up in automated vehicles too. How much should the car value the driver's life? Is the driver worth five pedestrians if the car has to avoid a deadly head-on collision and plow into a family walking by the side of the road? Two pedestrians? Ten? None?
People are machines. Or are people with artificial valves, artificial joints, and cameras instead of eyes no longer people?
 
That's probably what he meant.

Anyway, only an artificial brain would bring someone's personhood into question in my mind.
 
And what about someone with brain prosthetics? At what point do you consider they have an "artificial" brain?
 
Things get really, really tricky there at some point. I don't even know at which point. It kinda makes my head hurt trying to suss that sort of stuff out. LOL
 
The reason for asking is that you seem to consider there to be a difference in essence between a machine that is built up from proteins and one that uses electronics. What is the special feature of machines built from proteins compared to ones built with electronics/non-proteins?
 
Things get really, really tricky there at some point. I don't even know at which point. It kinda makes my head hurt trying to suss that sort of stuff out. LOL

I think it is a good way to help tease out many of our assumptions when talking about these matters.
 
The reason for asking is that you seem to consider there to be a difference in essence between a machine that is built up from proteins and one that uses electronics. What is the special feature of machines built from proteins compared to ones built with electronics/non-proteins?

I assume that other machines built like me experience the core features of cognition in ways at least somewhat similar to how I do. There's no reason to assume otherwise.

Something without a brain - without any of the mechanics we share with other humans and animals - might or might not be able to experience anything, period, much less as we do.
 
I think it is a good way to help tease out many of our assumptions when talking about these matters.

Absolutely. I really just don't know. MAYBE if everything about human consciousness/cognition was understood, I'd have a better idea about that, but I'm currently trying to figure out something complicated with what feels like little information.
 
Would you go through a transporter, or do you think it's suicide?

What's a "transporter" - you would need to define what one is, how it is meant to work (which of course has to be in line with how we know the world works so no Blake's 7 transporters*) and then people can decide if they like the idea of using one.



*I'm not saying we have to be able to build one or solve the practical problems, just that how it is proposed to work has to be consistent with how we know the world works; otherwise all we are doing is asking people whether they would rather click their ruby slippers or say Apparition.
 
What's a "transporter" - you would need to define what one is, how it is meant to work (which of course has to be in line with how we know the world works so no Blake's 7 transporters*) and then people can decide if they like the idea of using one.



*I'm not saying we have to be able to build one or solve the practical problems just that how it is proposed to work has to be consistent with how we know the world works else all we are doing is asking people would they rather click their ruby slippers or say Apparition.

The one from Star Trek is described by wiki as a thing that will "convert a person or object into an energy pattern (a process called dematerialization), then "beam" it to a target, where it is reconverted into matter (rematerialization)."
 
Yeah, that's called magic! That's why I said we had to exclude such things if we wanted to have a discussion about anything a hypothetical transporter can tell us about an actual real "problem". Otherwise we are just discussing fiction.
 
Ha! I found this, which is interesting and all, but it's not akin to the sci-fi people-teleporters.

So, I dunno.
 
If one thinks it's suicide, it doesn't follow that they think the hard problem is a thing.
Not at all, we've had many discussions about transporters of various types over the years and some of them I wouldn't use, and I don't think the HPC even exists!
 
Which types would you not use, and why?
 
Kirk: Energize!
...
Kirk: I said energize!
Transporter chief: I already did, captain.
Kirk: Then why am I still here?
Transporter chief: You aren't still here. You beamed down to the planet.
Kirk: What???
Transporter chief: You see, the transporter isn't quite as automatic as you were led to believe. (Pulls phaser.) Now follow me to Matter Reclamation.
 
I probably wouldn't use that kind of transporter.

On the other hand, Orson Scott Card has a short story, "Fat Farm", that, well... best if you read it yourself.
 
The more I'm exposed to the "hard problems" of philosophy, the more I'm inclined to think that something doesn't become a problem until it needs to be solved.

Nothing about the "hard problem" of consciousness in philosophy suggests to me that there's something there that needs to be solved.

The hard problem of earthquakes, on the other hand, is a real problem.

The hard problem of consciousness is right up there with the hard problem of Star Wars hyperdrive, in terms of things that are interesting to a certain kind of nerd, but pose no real problem, nor offer any practical result.
For the scientifically minded it's not a problem. But those who aren't inclined to science use it as a loophole for non-local consciousness theories.
 
Which types would you not use, and why?

Any of them that destructively scan me (DaratA). Yes, the person (DaratB) recreated at t'other end will feel the same way I did at the time of the scan, but he won't be me; DaratA was killed and reduced to a pile of ash!

When discussing this in the past I have said that, as long as it is done out of sight, the convenience of "teleportation" is such that we (as society) would simply not think about DaratA's ashes being swept away as DaratB steps out of the teleporter. We (humanity) are remarkably good at ignoring or simply not thinking about things when it is more convenient not to do so.
 

That's essentially my take, too.

Additionally, I don't think my mind uploaded to a computer would be "the me" I experience from a first person perspective, either. It would just be a cognitive clone in a computer that thinks it's me.
 
