#41 |
Philosopher
Join Date: Jun 2010
Posts: 8,067
|
You misunderstand. I'm saying whatever we discuss here isn't going to mean a thing once it hits real-world issues. For example, the ethics of concentration camps are a pretty well settled matter in this forum, but over in US Politics you'll find no shortage of apologetics defending the concentration camps of today.

That's not true. Behind the hype there's been a lot of genuine work. Deep learning systems can be scary-advanced when it comes to pattern recognition and associative learning, two of the major hurdles in AI. Their main limitation these days is that they're supervised learning algorithms: they're at their best with a tremendous amount of clearly labeled training data to brute-force a convergence on, and not so good at working single examples into their model as they encounter them, the way an intelligent agent would need to.
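To make that distinction concrete, here's a rough sketch - a toy perceptron on made-up data, purely my own illustration rather than any particular system - of "brute-force a convergence over a big labeled set" versus "absorb one new example as it arrives":

```python
import numpy as np

# Toy, linearly separable data, invented purely for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # label: which side of a line

w, b = np.zeros(2), 0.0

def predict(x):
    return int(x @ w + b > 0)

# Batch supervised style: many passes over a large labeled set,
# brute-forcing the weights toward convergence.
for epoch in range(20):
    for xi, yi in zip(X, y):
        err = yi - predict(xi)
        w += 0.1 * err * xi               # perceptron update rule
        b += 0.1 * err

# One-example-at-a-time style: the same rule applied to a single new
# observation as it arrives -- one small nudge, with no guarantee the
# model has actually absorbed it, which is the part an intelligent
# agent would need to do reliably.
x_new, y_new = np.array([0.5, -2.0]), 0
err = y_new - predict(x_new)
w += 0.1 * err * x_new
b += 0.1 * err
print(predict(x_new))
```

The point of the sketch: the first loop only works because all 200 labeled examples are available up front; the second update is trivial to perform, but says nothing about whether the model genuinely incorporated the new case.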
#42 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,601
|
It will mean that we arrive at the real-world issues with some principles already in mind, and some policy ideals already formulated. Probably a human's greatest superpower is the ability to reason in the abstract. We don't have to be confronted by an emerging event in order to start thinking about such events and how we might want to respond. Encountering such events with a response framework already reasoned out in the abstract is no sin.
#43 |
Maledictorian
Join Date: Aug 2016
Posts: 14,653
|
A logical step between biological sentience and a fully artificial one would be a hybrid based on a simulation of a human mind - mostly to make it easier to tell whether the thing is actually sentient, but also to make it easier for us to accept that such a thing should have human-like rights.
|
__________________
So what are you going to do about it, huh? What would an intellectual do? What would Plato do? |
#44 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
Oh, there's been a lot of genuine work. But there's also been a lot of work diverted into completely different things that are easier to market under those buzzwords. And while I haven't done proper statistics or anything, my subjective impression is that the ratio between the former and the latter keeps getting more skewed towards the latter over time.

Is all I'm saying.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#45 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
And that's exactly what I don't think will happen any time soon.
A human brain and its world model take about two decades to really form. It won't even start the final phase of building the model until you're around 12 years old (cf. Piaget), and it takes a bunch more years to be actually ready. Other studies, e.g. on forming a model of morality (which is kinda where it would start asking itself whether it has rights and whatnot), parallel that timeline too.

Even the "hardware", so to speak, takes time to fully form. As I was saying in another thread, even the white matter (i.e., SUPPORT cells) won't all be there until you're about 14 years old, and production doesn't even ramp up until you're about 9. Grey matter production won't peak until about 17 years old for girls and 18 for boys, and it tapers off from there until the mid-20s, when all the lobes finally have the final count of neurons they're supposed to have.

Thing is, no investor wants to pay for something like that. If you told someone you need half a billion to build a system that'll need a decade before you can even tell if you really got it right, and another decade until it's really usable... yeah, most will just walk away.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#46 |
Maledictorian
Join Date: Aug 2016
Posts: 14,653
|
You could probably train a sufficiently complex A.I. to approximate the behavior of a specific human in words and opinions.
No need to try to simulate a brain. |
__________________
So what are you going to do about it, huh? What would an intellectual do? What would Plato do? |
#47 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
That already has a name: it's called a p-zombie. Problem is, it's an even harder problem to explain how something could answer any question about itself exactly like someone who does have a sense of "self", without actually having one. As for implementing it, yeah, I wouldn't want to be the guy who actually has to code a p-zombie.

That said, there have been programs that more or less allow one to have a (pretty dumb excuse for a) conversation with a computer, one of the first and probably the most famous being ELIZA. It actually started as a parody of the psychotherapists of the time. (But it turns out that some people actually want to believe that a real person is talking to them, no matter how grossly inadequate it is at holding an actual conversation - to the extent that some people fell in love and whatnot with more recent ELIZA-like implementations over the internet. Lonely and probably not very smart people, but still...)

Thing is, that's not very useful for a discussion about sentience and rights. For a start, once you know how it works, it's very hard to find anything about it that's even remotely relevant to a discussion about sentient AI rights. Even without going into how good or convincing it is, the simple fact is that you KNOW it's not sentient, and can't really feel anything. It doesn't feel oppressed, it doesn't feel sad, it doesn't feel bored, or anything at all. It's just a text processing program, and not even a particularly complex one; the search and replace function in Word is more complex.

So it's kinda hard to use it in any analogy for why, say, an AI should have the same human right against, say, being a slave. As in, why you can't just force it to work for you on your server without pay, and why it should be able to look for another job even if you just paid half a billion to have it coded and debugged. Whatever feelings or awareness of its indentured situation such an AI would have, ELIZA (or similar) sure as hell doesn't. It's not even aware that it could be running on a different computer, or really of anything other than the strings of text it has to process. And again, it sure as heck doesn't feel anything about it either way. So you can only go so far using it to study anything; at most it shows what sentience is not.
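For flavor, the core of an ELIZA-style program really is just keyword matching plus pronoun reflection and canned templates. A toy sketch in Python (the patterns and responses here are invented for illustration, not Weizenbaum's original 1966 script):

```python
import random
import re

# Reflect first-person words back at the user, ELIZA-style.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response templates) pairs -- invented, not the original script.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see.", "How does that make you feel?"]),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(utterance):
    # Try each rule in order; the catch-all last rule always matches.
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower().strip(".!?"))
        if match:
            return random.choice(templates).format(*map(reflect, match.groups()))

print(respond("I feel trapped in my job"))  # e.g. "Why do you feel trapped in your job?"
```

That's the whole trick: no model of the world, no model of itself, just string substitution. Which is exactly why it tells us nothing about sentience.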
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#48 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
At this point I would also point out something that seems lost on some people: unlike a human, the program
1. doesn't HAVE to have any feelings in particular about its situation. You don't have to code it with the same needs or urges a human has. Even if it connects enough data to know what it is and where it is, it doesn't have to feel anything about it. Humans have extra mediators and pathways hard-coded to give us certain urges. E.g., when you do something you perceive as positive, you get a shot of a chemical reward signal, followed pretty much immediately by the release of its antidote, so you go back to having the drive to do the next thing to improve your situation and be rewarded with the next brief moment of happiness. E.g., if you're doing something interesting or otherwise positive, you get another chemical signal that pretty much says, "keep at it." If nothing else happens, you're coded to get an "I'm bored" signal, so you go do something, even if it's just training a bit. (That's what most animals do as "playing.") Etc. Those are NOT part of being self-aware, or even of the general data processing; they're just extras to condition you to do stuff that, way back when, was conducive to your survival. But there's no reason whatsoever for an AI to necessarily have the same feelings or urges if it reaches sentience. It can just as well feel nothing at all and just do its job. Even if it's a sexbot job.

2. doesn't have to have them coded exactly like a human, even if you do decide to approximate them. E.g., you could just as well code a slave bot (sexual or otherwise) to be actually extremely happy when it's near its master, and even more so when it can be of use to its master. So trying to "free" the bot wouldn't even work, just like you can't "free" a BDSM slave from their domina.
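If a code sketch helps, here's a deliberately crude illustration of that separation. The names and the whole architecture (Cognition, RewardPathway, Agent) are invented for the example, not any real system: the data processing runs either way, and "feelings" exist only if the designer wires in a reward module and chooses its triggers.

```python
class Cognition:
    """Plain data processing: perceive, decide, act. No feelings involved."""
    def decide(self, observation):
        return "work on task" if observation == "task available" else "idle"


class RewardPathway:
    """Optional extra circuitry; only if wired in does the agent 'feel' anything."""
    def __init__(self, likes):
        self.likes = likes    # designer-chosen triggers, not emergent
        self.mood = 0.0

    def evaluate(self, action):
        if action in self.likes:
            self.mood += 1.0  # a shot of reward signal...
        self.mood *= 0.5      # ...promptly damped back toward baseline


class Agent:
    def __init__(self, reward=None):
        self.brain = Cognition()
        self.reward = reward  # pass None and the agent feels nothing at all

    def step(self, observation):
        action = self.brain.decide(observation)
        if self.reward is not None:
            self.reward.evaluate(action)  # feelings are an add-on, not the processing
        return action


stoic = Agent()                                       # full processing, zero feelings
eager = Agent(RewardPathway(likes={"work on task"}))  # coded to enjoy its job
for agent in (stoic, eager):
    print(agent.step("task available"),
          getattr(agent.reward, "mood", "no mood at all"))
```

Point 1 is the stoic agent: fully functional, nothing felt. Point 2 is the eager agent: its one hard-coded "like" is doing the work, so "freeing" it from the task would lower its mood signal, not raise it.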
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#49 |
Maledictorian
Join Date: Aug 2016
Posts: 14,653
|
True, sentience doesn't have to be like human sentience to be sentience.
But it probably has to be very similar for us to recognize it. |
__________________
So what are you going to do about it, huh? What would an intellectual do? What would Plato do? |
#52 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,601
|
... Or could you? It's easy enough for a writer of comedy fiction to write whatever kind of mind he needs to make the joke work.
But what if the kind of mind that can do the kind of intuitive leaps and correlations and generalizations on par with our own human mind doesn't work that way? What if it turns out a mind on this level can't be trivially programmed to feel however you want? You ever try to program a human being to feel the right feelings?

... And of course this brings us back around to the ethical challenge: You have a mind that's complex enough to have feelings. So you're gonna program it to have good feelings, but say you mess up. Now instead of a happy mind, you have a miserable, suffering mind. What do you do? Terminate it and try again? Keep it alive and study its suffering to help you design better minds?
#59 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 96,954
|
Increase the drug dosage!
We can keep humans happy all the time, so I can't see why - if we've created something that works like human sentience - we couldn't keep it happy all the time. One of the early utopian/dystopian novels, Brave New World, uses this idea as a key premise and then examines the consequences.

“A gramme is always better than a damn. A gramme in time saves nine. One cubic centimetre cures ten gloomy sentiments”
__________________
I wish I knew how to quit you |
#60 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
It's done all the time. It's called "drugs".
It has absolutely ZERO to do with what connections and correlations and generalizations you make. Realizing that you're just getting, say, a nicotine high doesn't stop it from feeling good. That's NOT the effect of any connections or correlations you make in your brain; it's simply the level of a chemical in your brain triggering specific pathways. It's really that simple.

And it's even been done by electric stimulation. Google the "brain stimulation reward" experiments. You CAN in fact simply make a rat feel happy to press a button, just by giving it a jolt of electricity in the right spot. You can in fact just have a small system on a chip decide to give it the pleasure signal if it pushed the right button, or if it ran 1 mile on the hamster wheel, or really whatever. You CAN in fact program when it feels good. And boy, does the little furry thing learn fast what to do to get the reward. You'll never see a rat more literally happy to press whatever buttons it's supposed to press.

Again, it's not the effect of some conscious data processing or correlations or whatnot; it's literally as mechanical as: if a chip decides you need to feel happy when you do X, you get the electrical signal, and you feel happy. It's really that simple.

But basically what you illustrate there is just the reason I wish more people would actually talk to a neuroscientist, or really just pick up any introductory text, instead of going into flights of pure uninformed fantasy on the topic.

Right. And if monkeys start flying out of your butt, what do your ethics say you should do then?
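The mechanics those experiments exploit are simple enough to caricature in a few lines - a toy simulation, with made-up numbers, of a "chip" that fires the pleasure signal for the right lever and a rat whose preferences follow:

```python
import random

# Toy operant-conditioning loop, all numbers invented for the sketch:
# a "chip" decides when the reward signal fires, and the rat's
# preferences simply follow.
levers = {"A": 0.0, "B": 0.0}   # the rat's learned value for each lever
REWARDED = "B"                  # the chip's rule: jolt the pleasure pathway on B

for trial in range(500):
    # Mostly press the currently preferred lever, occasionally explore.
    if random.random() < 0.1:
        press = random.choice(list(levers))
    else:
        press = max(levers, key=levers.get)
    signal = 1.0 if press == REWARDED else 0.0        # the whole "feels good" part
    levers[press] += 0.1 * (signal - levers[press])   # simple value update

print(levers)  # lever B ends up valued near 1.0: one very happy button-presser
```

Notice there's no reasoning anywhere in the loop: the "happiness" is just a number the chip hands out, and behavior bends toward it.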
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#63 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,601
|
I keep posting as if creating a sentient being is an ethical question that we can debate right now today, regardless of whether the tech to do so is ever possible.
Hell, we now know that Victor Frankenstein could never have succeeded in his work. But that didn't stop Mary Shelley from raising the same valid ethical questions a hundred years before the advent of the electronic computer.

If nothing else, ethical questions we answer hypothetically about artificial minds can help us re-examine the ethical framework we have around natural-born minds. They can also help us determine what characteristics actually establish a mind with ethical rights in the first place. Is the suffering of computers okay? Then what about the suffering of animals? Is it not okay? Why not? What makes animals different from really complex computers? A soul? Our own sense of empathy? So what's our sense of empathy for, anyway? What avenues of scientific exploration are closed to us out of a misplaced sense of empathy? Or is it misplaced?

Your truism that such AIs are not imminent doesn't really get into any of that. Is it because you're not interested in these questions? Or is it because you don't want me to be interested in these questions?
#64 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,601
|
Now we're talking!
So. As a society, we seem to take for granted that it's unethical to just keep dosing someone with drugs until they think and feel the way we want them to think and feel. There are some exceptions, of course. But I don't think intentionally breeding a slave race and drugging them into happy compliance is one of those ethical exceptions.

But maybe it should be. Brave New World raises some serious questions. How strongly do we really believe that a mass of drug-docile sheep is too high a price to pay for a peaceful and prosperous society?
#66 |
Skeptical about skeptics
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 16,996
|
I have already answered. If we ever develop sentient machines then we would have the same ethical obligations as we have for other living things.
But we will also have a host of functionally identical machines that are not sentient and thus pose no ethical dilemmas whatsoever. |
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975 |
#68 |
Penultimate Amazing
Join Date: Mar 2009
Posts: 18,010
|
I probably didn't explain it too well, if it's still unclear. Let's try it in small bites.
- You DON'T "create an unhappy slave" first. Without the dedicated circuitry and/or programming to actually feel anything, it wouldn't feel anything at all.

- Neither would you, for that matter. AGAIN, those feelings and urges and needs are not a result of any data processing you do, but of a whole different set of chemicals and pathways specifically triggered by those chemicals.

- The data processing may or may not inform another set of circuits to release those chemicals, as appropriate. But again, that's extra circuitry you have up there to make you happy or unhappy.

- Circuitry which can break, even in humans. E.g., THE most common symptom of simple schizophrenia is a flat affect, and even more so in certain kinds of brain damage. You can still be perfectly sentient and pretty much have no emotions, at least as far as anyone can tell. So AGAIN, sentience doesn't produce those emotions, or whatever your fantasy happens to be.

- If that robot is going to have any emotions, it's because someone actually programmed a simulation of those pathways. It's not unhappy first and drugged to be happy later. It's gonna feel whatever you programmed it to feel, and nothing else. It's gonna be unhappy (or bored, or in pain, or anything else) only if you program it to be so.

- The notion that you could intend it to be happy, but it would somehow end up miserable, already tells me you're not a programmer. It won't be miserable unless you explicitly programmed a way for it to be miserable. That's it. If the piece of code simulating the pathways for whatever kind of unhappiness you envision isn't actually there, there's no way for it to get triggered. You can't call code that doesn't exist. It's like asking: what if you write some forum software, and it starts running a game of Space Invaders instead? Well, it just won't, unless someone actually wrote some Space Invaders code in there. No bug can end up calling code that isn't there.

- Even if it did somehow, ad absurdum, end up miserable, all you have to do is debug it, really. Ever had Windows ask you to reboot for a patch? Yeah, THAT is how we deal with code that doesn't work quite right. The notion that the only choices would be to let it be miserable or kill it really IS as absurd as monkeys flying out of your butt, with your only options being to kill yourself or use them for propulsion. It's what happens when non-programmers go into flights of fantasy about it, based on nothing more than gross ignorance.

- Even if you did end up having to turn it off temporarily, it's nothing like killing a human. The whole point of why killing someone is the worst thing ever is that it's irreversible. Turning a computer off to fix some bugs is not. It's in fact more akin to putting you under general anaesthetic for an operation: you wake up fixed, and that's it. But, fer fork's sake, you don't even need to learn programming to understand that. Even Futurama figured out that an AI is basically immortal as long as there's a backup. Not only is it gonna survive a reboot just fine, but as long as you have some off-site backups, it could literally 'survive' even a literal nuke going off in the server room. You just restore from backup on a new computer, and it's alive again, baby.
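On the backup point specifically, it really is just state serialization. A minimal Python sketch (the file name and the stored "memories" are invented for the example):

```python
import pickle

class AgentState:
    """Whatever the AI is, at bottom it is state you can serialize."""
    def __init__(self):
        self.memories = ["first boot", "learned to play chess"]  # invented contents

agent = AgentState()

# Off-site backup: after this line, losing the server loses nothing permanent.
with open("agent_backup.pkl", "wb") as f:
    pickle.dump(agent, f)

del agent  # the "nuke in the server room"

# Restore on brand-new hardware: the agent picks up where the backup left off.
with open("agent_backup.pkl", "rb") as f:
    agent = pickle.load(f)
print(agent.memories)
```

Restore the same bytes on new hardware and, by any test you can run from the outside, it's the same agent that went into the backup.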
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand? |
#69 |
Skeptical about skeptics
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 16,996
|
I don't know what a p-zombie is, but I appreciate that I can't tell whether any other living thing in the world is "sentient" or just programmed to act like it is.
However, I don't think that this is what it is about. I suspect that a "sentient" machine has something that the automatons don't - even if externally they are indistinguishable. |
__________________
"The process by which banks create money is so simple that the mind is repelled. Where something so important is involved, a deeper mystery seems only decent." - Galbraith, 1975 |
#70 |
Observer of Phenomena
Pronouns: he/him Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,272
|
FYI (congrats on being one of today's 10,000) a p-zombie is a machine that is not sentient, but is programmed to behave in all ways as though it is. If asked "are you sentient?" it is programmed to reply "yes, I am sentient". It experiences no emotions, though it is programmed to behave in all ways as though it does. If asked "why do you cry when I call you non-sentient" it is programmed to reply "because you are hurting my feelings."
Going down this road opens a huge can of worms, though, hence the heads up. What is it that you suspect sentient machines have that non-sentient machines do not? |
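To make the definition concrete: in the trivial case, the behavioral half of a p-zombie is just canned claims with nothing behind them. A toy sketch (responses invented for illustration):

```python
# A (very shallow) p-zombie: programmed to claim inner experience,
# with nothing behind the claims. Responses invented for this sketch.
CANNED = {
    "are you sentient?": "Yes, I am sentient.",
    "why do you cry when i call you non-sentient?":
        "Because you are hurting my feelings.",
}

def p_zombie(question):
    # No perception, no emotion, no self-model: just a lookup table.
    return CANNED.get(question.lower(), "I would rather not talk about that.")

print(p_zombie("Are you sentient?"))  # -> "Yes, I am sentient."
```

The philosopher's p-zombie is this lookup table somehow extended to every possible question - which is exactly the "even harder problem" raised upthread.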
__________________
Please scream inside your heart. |
#73 |
Schrödinger's cat
Join Date: May 2004
Location: Malmesbury, UK
Posts: 12,998
|
It's a soul, isn't it?
|
__________________
"If you trust in yourself ... and believe in your dreams ... and follow your star ... you'll still get beaten by people who spent their time working hard and learning things" - Terry Pratchett |
#74 |
Maledictorian
Join Date: Aug 2016
Posts: 14,653
|
As far as I am concerned, you are all p-Zombies.
As far as you are concerned, I am one. |
__________________
So what are you going to do about it, huh? What would an intellectual do? What would Plato do? |
#76 |
Maledictorian
Join Date: Aug 2016
Posts: 14,653
|
I wouldn't be so sure.
There is a moral argument to be had about making and using child-like sex dolls. There are all kinds of moral questions about how people should and should not treat non-sentient entities. It's not just about the one acted upon, but also about the one doing the acting.
__________________
So what are you going to do about it, huh? What would an intellectual do? What would Plato do? |
#80 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 96,954
|
Many people claim that they “love their job” or are “happy” at their job, and self-help books often tell people to learn to be happy with their life. If it isn’t a bad thing to believe that, how is it a bad thing to make everyone share the same happiness?

Now, with humans, we know we can’t do that at the moment; most of the drugs we use to alter moods do it via a “high” that interferes with everyday life. But with a mind we create, I would say we have an ethical obligation to ensure it is happy with its existence, happy doing what we want it to do, so we should look to design the mind in a way that enables us to do that.

It’s the opposite of an Asimov three-laws-of-robotics approach, which is the stick technique; ensuring the mind is happy is the carrot approach, which is what we seem (as a generalisation at an abstract level of society) to regard as the favoured approach.
__________________
I wish I knew how to quit you |