
Go Back   International Skeptics Forum » General Topics » Religion and Philosophy
 


Old 14th March 2019, 04:26 PM   #121
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 79,546
Sure, I could go with either.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 14th March 2019, 05:25 PM   #122
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 32,509
Originally Posted by Belz... View Post
Sure, I could go with either.
I couldn't. The VK test makes assumptions about the nature of AI that are specific to the story.

The EM test is based on the human nature of the tester. It is much better suited to hypotheticals about AI in the real world.
Old 14th March 2019, 05:28 PM   #123
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Originally Posted by JoeMorgue View Post
I think that if someone assumes that an intelligence greater than them would automatically default to "Kill the lesser beings" it says more about them than it does about any potential future AI.
Who has suggested such a thing?

If I go into a company as an IT specialist and find that their accounting software is written in BASIC and runs on a BBC computer then I don't default to 'Kill lesser IT systems'.

Nevertheless I doubt that the BASIC program running on a BBC computer would be part of the IT strategy I would recommend.
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 14th March 2019, 05:31 PM   #124
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Originally Posted by Robin View Post
Who has suggested such a thing?

If I go into a company as an IT specialist and find that their accounting software is written in BASIC and runs on a BBC computer then I don't default to 'Kill lesser IT systems'.

Nevertheless I doubt that the BASIC program running on a BBC computer would be part of the IT strategy I would recommend.
And just incidentally, I probably would not go with the BBC computer running a BASIC program even if I did not have a definition of a good IT solution that would satisfy the internet.
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 14th March 2019, 05:33 PM   #125
MEequalsIxR
Critical Thinker
 
MEequalsIxR's Avatar
 
Join Date: Dec 2018
Posts: 283
We can't even agree on basic definitions, like what AI is or the difference between the code it's written in and the platform it runs on, let alone what thinking and self-awareness are, or whether communicating and being self-aware are part of it or not.

It's like looking at your arm and saying this is where my arm ends and my wrist begins or this is where my wrist ends and my hand begins.

For all I know, my GPS unit thinks. It also communicates with me. It is an impressive piece of engineering, both hardware and software, and it does some funny things. I may start out with a favorite destination, and it may map the usual route or one of two alternates. It does not have the ability to get traffic updates, so that is not part of the planning. I usually use it a little after 4 AM, so the time is always within a few minutes one way or the other. Can it really think? I don't know; probably not.

Originally Posted by Tassman View Post
Ah-ha, that's what they want you to think.

It's not quite that simple. It might "look like gibberish or a string of typos, but researchers say it's actually a kind of shorthand". "Facebook's bots were left to themselves to communicate as they chose, and they were given no directive to stick to English. So the bots began to deviate from the script".

https://www.cbsnews.com/news/faceboo...-intelligence/

IOW: Unless specifically programmed otherwise, AI bots can and will go their own way, as in this instance, and who knows where they might end up.
The first highlighted bit is important if data is carried between the units, not if data is passed to us. If data is communicated, then they created a language and are communicating. Bees seem to have a language, so why not bots with far more computing power?

The second highlighted bit is exactly right IMO.
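For what it's worth, an "invented language" between bots needn't be exotic. Press accounts of the Facebook experiment suggested the agents' repetition might have encoded quantities. A toy Python sketch of that kind of repetition-based shorthand (the scheme and names here are hypothetical, purely for illustration):

```python
# Toy illustration of a repetition-based shorthand, loosely inspired by
# press descriptions of the Facebook negotiation bots. Hypothetical scheme:
# an item name repeated n times stands for "n of that item".

def encode(item: str, count: int) -> str:
    """Encode 'count' units of 'item' as the word repeated 'count' times."""
    return " ".join([item] * count)

def decode(message: str) -> dict:
    """Count repetitions of each word to recover the item quantities."""
    counts: dict = {}
    for word in message.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

offer = encode("ball", 3) + " " + encode("hat", 1)
print(offer)          # ball ball ball hat
print(decode(offer))  # {'ball': 3, 'hat': 1}
```

Gibberish to a human reader, but perfectly decodable between two agents that share the convention; that is all "creating a language" has to mean here.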
__________________
Never trust anyone in a better mood than you are.
Old 14th March 2019, 05:38 PM   #126
MEequalsIxR
Critical Thinker
 
MEequalsIxR's Avatar
 
Join Date: Dec 2018
Posts: 283
Originally Posted by I Am The Scum View Post
Nobody in this thread is arguing that an AI is capable of actual feelings in the same way that humans can (though they may be good at imitating it). Occasionally, anthropomorphic language is used because it is easier to understand.

I think it would be easier if, for the sake of argument, everyone conceded that AI does not have actual personhood (a mind, intentions, desires, etc.)
I can't agree AI won't have feelings or emotions. I doubt they would be the same as humans', but even we vary: Ted Bundy's feelings and emotions are/were much different from mine.

I also can't agree that AI wouldn't have the equivalent of personhood. It wouldn't be a person in the sense an actual biological human is, but I believe animals (at least most) have a sentient being looking out from inside their skulls, so I believe they are individuals and each varies. It could well be the same with an artificial one.
__________________
Never trust anyone in a better mood than you are.
Old 14th March 2019, 05:55 PM   #127
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Originally Posted by I Am The Scum View Post
Nobody in this thread is arguing that an AI is capable of actual feelings in the same way that humans can
I am not, but only because I am not arguing any position about the possibility of AGI.

But a machine that could not have feelings could not understand, for example, what the words 'soup' or 'sauce' mean. It would not be capable of understanding the concept of these things.

And I would have difficulty saying that something which couldn't understand what 'soup' or 'sauce' mean had a general intelligence even equivalent to ours.
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 14th March 2019, 06:06 PM   #128
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 79,546
Originally Posted by MEequalsIxR View Post
I can't agree AI won't have feelings or emotions. I doubt it would be the same as humans but even we vary. Ted Bundys feelings and emotions are/were much different than mine.
Except that feelings and emotions, for us, are chemical: hormones and proteins that trigger specific responses, all of which serve survival and reproduction purposes.

What emotions could an AI even have? How would that work?

It's like assigning emotions to God. He's not even biological. It sounds like projection; a typical human tendency to assign human intent to all sorts of non-human stuff.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 14th March 2019, 06:33 PM   #129
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
I see no reason why it would be impossible to build a machine with feelings. If feelings are produced by some arrangement of molecules, then I see no reason why some other arrangement of molecules should be incapable of producing the same thing, or something similar.

Sent from my Moto C using Tapatalk
__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Old 14th March 2019, 08:51 PM   #130
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,710
Are we having the personhood argument or the human survival argument?
Old 14th March 2019, 08:53 PM   #131
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,710
Originally Posted by The Great Zaganza View Post
As mentioned, the Turing Test is pretty much irrelevant when it comes to determining whether an AI poses a threat or not.
"At least the machines are gonna feel bad about it," I say to myself as Killbot 5000 distills the water from my blood.
Old 14th March 2019, 09:01 PM   #132
MEequalsIxR
Critical Thinker
 
MEequalsIxR's Avatar
 
Join Date: Dec 2018
Posts: 283
Originally Posted by Belz... View Post
Except that feelings and emotions, for us, are chemical: hormones and proteins that trigger specific responses, all of which serve survival and reproduction purposes.

What emotions could an AI even have? How would that work?

It's like assigning emotions to God. He's not even biological. It sounds like projection; a typical human tendency to assign human intent to all sorts of non-human stuff.
I don't know. There's no way to really know. An AI may or may not have feelings and emotions. If AI did have such things, they would certainly have different causes, since presumably it would be made from what amounts to a computer and have none of those nasty chemicals sloshing around. But I don't think the possibility should be dismissed out of hand.

It may be that the designers want some of those features for one reason or another, or that the AI decides it needs to explore what they are, or that they emerge as an outgrowth of the machine's innate functions.
__________________
Never trust anyone in a better mood than you are.
Old 14th March 2019, 09:16 PM   #133
angrysoba
Philosophile
 
angrysoba's Avatar
 
Join Date: Dec 2009
Location: Osaka, Japan
Posts: 24,640
I'm a bit confused about the state of play here.

Are we saying that if a machine can convince us that it is human in a Turing test, then we must say it is thinking, but if a machine can convince us it is in pain, upset, angry, or worried about its future, then we must say these emotions are not genuine because it isn't made of meat, doesn't have hormones or an amygdala, etc...?

Is there a sharp distinction between cognition and emotion in which we can say the computer has the former but not the latter?
__________________
"The thief and the murderer follow nature just as much as the philanthropist. Cosmic evolution may teach us how the good and the evil tendencies of man may have come about; but, in itself, it is incompetent to furnish any better reason why what we call good is preferable to what we call evil than we had before."

"Evolution and Ethics" T.H. Huxley (1893)
Old 14th March 2019, 10:18 PM   #134
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 7,014
We won't treat machines like humans unless there is some direct connection to a human, like a mind upload or a simulation based on a specific human's life.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn’t.
Old 15th March 2019, 02:00 AM   #135
ralfyman
Thinker
 
ralfyman's Avatar
 
Join Date: Jun 2015
Posts: 225
It will be lucky if it can survive the effects of limits to growth.
Old 15th March 2019, 02:23 AM   #136
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 79,546
Originally Posted by MEequalsIxR View Post
I don't know. No way to really know. An AI may or may not have feelings and emotions. If AI did have such things they would have different causes certainly since presumably they would be made from what amounts to a computer and have none of those nasty chemicals sloshing around. But I don't think the possibility should be dismissed out of hand.
Yeah, but "may or may not have" isn't much of an argument. My death may or may not be due to me running into the sun, but those two fates aren't 50/50. Odds are, I won't.

My point is that while we can certainly make an AI in a way that it would have either emotion-like behaviours or actual emotions, it's definitely not something that comes automatically with intelligence, and I don't know why some are presuming that it would.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 15th March 2019, 05:09 AM   #137
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 6,598
Originally Posted by Robin View Post
But a machine that could not have feelings could not understand, for example, what the words 'soup' or 'sauce' mean. It would not be capable of understanding the concept of these things.
What an odd example to make. Why do you feel so strongly about soups?
Old 15th March 2019, 07:42 AM   #138
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,412
Originally Posted by JoeMorgue View Post
I was actually being semi-serious though.

I think much of the handwringing that's gonna happen when we start getting very advanced programs, and we start trying to split the hairs and draw the lines, is going to be the same distinction-without-a-difference crap the philosophizers are already doing, so... I don't care.

If it walks like a duck and quacks like a duck I don't care if it's a duck or just "an amazing simulation of a duck" in most cases.
Sure, I agree. Unfortunately (and I say unfortunately because I actually love the idea of being able to have a conversation with an actual AI), so far we have yet to create something that can even "behave" as if it were actually intelligent, at least as intelligent as us, let alone more intelligent. We have yet to create a chatbot that can hold a conversation without non-sequiturs and nonsensical responses. (Although your point does make a lot of sense, because even in discussions on this forum with actual human beings I encounter a lot of them vomiting out non-sequiturs and completely failing to follow what I'm saying, so touché.)

Sometimes the "broken clock is right twice a day" rule applies in a very poetic way when interpreting a chatbot's response. I remember my first interaction with the John Lennon chatbot: I asked it "Do you ever die?", to which it replied something like "I try to die as many times as possible. How about you?" I smiled as I thought, "Well... that does sound like something John Lennon would actually say, in his typical dry humor." Still, the lack of actual intelligence in chatbots is transparent. It is evident that they are computer programs doing their best to behave as if they were listening and giving their own "thoughts" on the subject, but they are not.
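That transparent lack of intelligence is easy to reproduce: classic chatbots are little more than keyword lookups with a canned fallback, which is exactly where the non-sequiturs come from. A minimal ELIZA-style sketch in Python (the rules and replies are invented for illustration):

```python
# Minimal keyword-matching chatbot in the ELIZA tradition. When no rule
# matches, it falls back to a canned deflection -- the classic source of
# chatbot non-sequiturs. All rules here are made up for illustration.

RULES = {
    "die": "I try to die as many times as possible. How about you?",
    "think": "What makes you ask whether I think?",
    "feel": "Tell me more about these feelings.",
}

FALLBACK = "Interesting. Please go on."  # used whenever nothing matches

def respond(utterance: str) -> str:
    """Return the reply for the first matching keyword, else the fallback."""
    lowered = utterance.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return FALLBACK

print(respond("Do you ever die?"))  # keyword hit: reads as almost witty
print(respond("What is soup?"))     # no rule matches: canned non-sequitur
```

When a keyword happens to fit the conversation, the canned reply can look eerily apt (the broken-clock effect); when none does, the fallback is an obvious non-sequitur.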

But that's a separate question from what I meant by AI, again, as it's understood in the sense that people like Sam Harris and Elon Musk mean when they talk about AI. They're talking about a form of intelligence that is superior to us, and that keeps growing its intelligence exponentially.

The essential question, devoid of all technicalities as to "what constitutes an actual AI blablabla", can be simply phrased this way: If we could create something that is smarter than us.... should we?

I think not.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan

Last edited by Ron_Tomkins; 15th March 2019 at 07:45 AM.
Old 15th March 2019, 07:51 AM   #139
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 6,598
Don't you want your children to be smarter than you are?
Old 15th March 2019, 07:53 AM   #140
JoeMorgue
Self Employed
Remittance Man
 
JoeMorgue's Avatar
 
Join Date: Nov 2009
Location: Florida
Posts: 16,523
Originally Posted by Ron_Tomkins View Post
The essential question, devoid of all technicalities as to "what constitutes an actual AI blablabla", can be simply phrased this way: If we could create something that is smarter than us.... should we?
1. I think it's highly probable that AI and human augmentation are going to be parallel developments, if not functionally the same. Some level of brain/computer integration is going to happen way, way before we start developing AI of the kind being discussed here. This is also another reason I lean far away from the "AI is going to be this singular lightning-strike moment that just happens" thing.

So I think the idea that we're at some reasonable level of risk (obvious caveat that we can't see the exact future, even when we're not talking about the Singularity) from creating machines that are going to rapidly outpace us is overblown. The same technology we use to make machines smarter than humans is going to make the humans smarter at the exact same (or at least an equivalent) rate.

I think it is infinitely more likely that brain/computer integration is going to be an established thing that leads to AI: as we integrate computers with human brains more and more, AI is almost going to be an inevitable side effect, and in that scenario it's a lot harder to pinpoint the place where AI suddenly becomes this separate thing we have to worry about singing "Daisy Bell" or sending Terminators after us.

tl;dr version: by the time AI could become a threat to us, the line between AI and "us" isn't going to be there. We will be the tech and the tech will be us. Worrying about AI in that world will be akin to worrying that your hearing aid or artificial leg is going to overthrow you when you take it off at night.

2. Again, this is all academic, because we can't stop it. We would need a form of worldwide Luddite totalitarianism that's unimaginable.

I get that being one of the last techno-optimists in a world where pretty much everybody thinks we're gonna be living in a Black Mirror episode any day now is a hard sell at this point, but "oh, this new tech is going to be the end of us" has been the line since Og smashed two rocks together to make the first "rock with a sharp edge."
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC
Old 15th March 2019, 08:25 AM   #141
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,412
Originally Posted by JoeMorgue View Post
1. I think it's highly probable that AI and human augmentation are going to be parallel developments, if not functionally the same. Some level of brain/computer integration is going to happen way, way before we start developing AI of the kind being discussed here. This is also another reason I lean far away from the "AI is going to be this singular lightning-strike moment that just happens" thing.

So I think the idea that we're at some reasonable level of risk (obvious caveat that we can't see the exact future, even when we're not talking about the Singularity) from creating machines that are going to rapidly outpace us is overblown. The same technology we use to make machines smarter than humans is going to make the humans smarter at the exact same (or at least an equivalent) rate.

I think it is infinitely more likely that brain/computer integration is going to be an established thing that leads to AI: as we integrate computers with human brains more and more, AI is almost going to be an inevitable side effect, and in that scenario it's a lot harder to pinpoint the place where AI suddenly becomes this separate thing we have to worry about singing "Daisy Bell" or sending Terminators after us.

tl;dr version: by the time AI could become a threat to us, the line between AI and "us" isn't going to be there. We will be the tech and the tech will be us. Worrying about AI in that world will be akin to worrying that your hearing aid or artificial leg is going to overthrow you when you take it off at night.

2. Again, this is all academic, because we can't stop it. We would need a form of worldwide Luddite totalitarianism that's unimaginable.

I get that being one of the last techno-optimists in a world where pretty much everybody thinks we're gonna be living in a Black Mirror episode any day now is a hard sell at this point, but "oh, this new tech is going to be the end of us" has been the line since Og smashed two rocks together to make the first "rock with a sharp edge."
Yeah, you're right. I had totally forgotten about that alternative. It's true: the way we're incorporating technology is already leading us closer to becoming cyborgs. We do artificial heart implants and all sorts of artificial inserts in the body to correct physical problems. Elon Musk talks about our use of the smartphone as a sign that we are already cyborgs: our smartphones augment our cognitive ability, functioning as an extension of our brain to access and search for information at a faster rate. It's just a matter of time until we perform brain implants, inserting some sort of chip that extends our brain, serving as a more direct version of the same thing we do with computers.

So yeah, I agree with you that that's most likely what's gonna happen.

The question however, not to be nitpicky, is "assuming this is not what we do, and instead, that we just create something that is smarter than us, should we?" To that question, I answer no.

But to becoming cyborgs: Sure. Again, we already pretty much do that anyway.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan

Last edited by Ron_Tomkins; 15th March 2019 at 08:27 AM.
Old 15th March 2019, 08:26 AM   #142
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,412
Originally Posted by Beelzebuddy View Post
Don't you want your children to be smarter than you are?
1) I don't have nor intend to have children

2) A human child, no matter how much smarter than me he could be, is not the same as an Artificial Intelligence entity capable of increasing its intelligence far, far beyond any human ability.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Old 15th March 2019, 09:00 AM   #143
JoeMorgue
Self Employed
Remittance Man
 
JoeMorgue's Avatar
 
Join Date: Nov 2009
Location: Florida
Posts: 16,523
Originally Posted by Ron_Tomkins View Post
The question however, not to be nitpicky, is "assuming this is not what we do, and instead, that we just create something that is smarter than us, should we?" To that question, I answer no.
And my answer still has to be a sort of tempered "it's gonna happen regardless, so we might as well focus our effort on making it work in our favor." Sure, we could narrow the question down to basically "should we make an AI not knowing what is going to happen?", but that's on the same level as saying "well, just invent an AI that can't turn evil" as a restriction on what's being discussed.

But even beyond all that, I'm still a supporter of the idea. We built machines to do things our bodies are too weak to do. I don't see a difference in doing the same thing for things our "minds" are too weak to do.

As we grew and evolved as a civilization we hit walls as to what we could do with our bodies in regard to accomplishing physical tasks, so we created machines to do the tasks we couldn't. If/when we find ourselves at the same crossroads with our mental limits, I don't see why the answer shouldn't be the same: build something that can.

Quote:
But to becoming cyborgs: Sure. Again, we already pretty much do that anyway.
"Would you, the game asks, if given the chance supplement your body with machines. *Long beat* What do you mean would I? I already wear spectacles. And a wristwatch. And I always carry a phone which I am currently in the process of trying to find a way to duct tape to my head."

- Yahtzee Croshaw in his review of "Deus Ex: Human Revolution."

Within the lifetime of some younger people alive right now, not being mentally augmented with some level of tech will be seen as... about as inconceivable as walking around bumping into objects because you think glasses or contacts or LASIK are "unnatural." It will make you seem Amish.

And this is already more true than people want to admit. History professor C.G.P. Grey posed the following question: if you had to give someone either totally unrestricted access to your conscious mind or totally unrestricted access to your smartphone, which one would you choose? And really think about it.

"To already consider tech an extension of yourself isn't crazy. To say your phone knows more about you than you do isn't an exaggeration; it's a statement of fact. Do you remember your location every minute of every day? Do you remember what you said to your friend last leap day at 10:47, word for word? Yeah, of course not. Hell, without photos, entire holidays would slide out of your mind. While paperwork that tracked all this has existed since papyrus, without people considering it an extension of themselves, a phone can hold millions of pages of papyrus, and at a certain point differences in scale become differences in kind.

Since you bought it, how many hours has your phone been more than an arm's length away from you? Possibly zero. It's like no other object in your life. So if given a choice between someone reading your mind and reading your phone, if you really, really thought about it, you'd probably choose the former. Compared to what's on your phone, your brain holds a tiny amount of information, much of it wrong, all of it lossy."

That's why I'm so picky about how laws are being applied to personal electronics and personal data now. These laws WILL be used as the precedent for how our very thoughts are protected, sooner than we realize.
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC
Old 16th March 2019, 05:26 AM   #144
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 6,598
Originally Posted by Ron_Tomkins View Post
1) I don't have nor intend to have children

2) A human child, no matter how much smarter than me he could be, is not the same as an Artificial Intelligence entity capable of increasing its intelligence far, far beyond any human ability.
1) Fair enough

2) Even human children know the smart choice is to pull the plug and pocket the inheritance first chance they get. Yet that almost never happens. Why do you think that is?

Originally Posted by JoeMorgue View Post
If someone gave you the option of having totally unrestricted access to your conscious mind or total unrestricted access to your smart phone... which one would you choose? And really think about it.
They can have my brain AND my phone, just stay away from my browser history.

Last edited by Beelzebuddy; 16th March 2019 at 05:27 AM.
Beelzebuddy is offline
Old 17th March 2019, 05:03 PM   #145
BigFace42
Scholar
 
Join Date: Apr 2012
Posts: 74
Hi Robin,

As I haven't seen it mentioned in the thread anywhere, have you read Max Tegmark's Life 3.0?

I'm just coming to the end of it now and found it a compelling read on how we think about AI impacting humanity. I'd be interested in thoughts from other contributors to the thread too.

I'm not that smart, but I think his book has explained the AI space better than anything else I have read so far, and I'm really interested in feedback/criticisms of it.
BigFace42 is offline
Old 18th March 2019, 02:36 PM   #146
Thor 2
Illuminator
 
Thor 2's Avatar
 
Join Date: May 2016
Location: Brisbane, Aust.
Posts: 4,661
I'm a bit worried about my automatic washing machine. Seems to spend big lumps of time with nothing happening. I wonder if it's deep in thought and plotting how to take over the house.
__________________
Thinking is a faith hazard.
Thor 2 is online now
Old 18th March 2019, 05:35 PM   #147
Robin
Philosopher
 
Join Date: Apr 2004
Posts: 9,401
Originally Posted by BigFace42 View Post
Hi Robin,

As I haven't seen it mentioned in the thread anywhere, have you read Max Tegmark's Life 3.0?

I'm just coming to the end of it now and found it a compelling read on how we think about AI impacting humanity. I'd be interested in thoughts from other contributors to the thread too.

I'm not that smart, but I think his book has explained the AI space better than anything else I have read so far, and I'm really interested in feedback/criticisms of it.
No, I haven't gotten round to it yet

__________________
The non-theoretical character of metaphysics would not be in itself a defect; all arts have this non-theoretical character without thereby losing their high value for personal as well as for social life. The danger lies in the deceptive character of metaphysics; it gives the illusion of knowledge without actually giving any knowledge. This is the reason why we reject it. - Rudolf Carnap "Philosophy and Logical Syntax"
Robin is offline
Old Yesterday, 12:18 AM   #148
Tassman
Muse
 
Tassman's Avatar
 
Join Date: Aug 2012
Posts: 918
Originally Posted by BigFace42 View Post
Hi Robin,

As I haven't seen it mentioned in the thread anywhere have you read Max Tegmark's Life 3.0?

I'm just coming to the end of it now and found it a compelling read on how we think about AI impacting humanity. I'd be interested in thoughts from other contributors to the thread too.

I'm not that smart, but I think his book has explained the AI space better than anything else I have read so far, and I'm really interested in feedback/criticisms of it.
I haven’t read that one (sounds like my type of book) but Ray Kurzweil’s ‘The Age of Spiritual Machines’ is also a compelling read.

Kurzweil believes evolution provides evidence that humans will one day create machines more intelligent than they are. He presents his law of accelerating returns to explain why "key events" happen more frequently as time marches on. He also explains why the computational capacity of computers is increasing exponentially and that this increase is one ingredient in the creation of artificial intelligence; the others are automatic knowledge acquisition and algorithms like recursion, neural networks, and genetic algorithms. (paraphrased from the blurb)
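For what it's worth, the "computational capacity increasing exponentially" claim is easy to make concrete with a bit of arithmetic. Here's a toy sketch; note the 18-month doubling period is just the classic Moore's-law figure, my assumption for illustration, not something taken from Kurzweil's book (his actual argument involves successive paradigms rather than one constant rate):

```python
# Toy illustration of exponential growth in compute capacity.
# Assumes one fixed doubling period (~18 months, the classic
# Moore's-law figure) purely for illustration.

def capacity_multiplier(years, doubling_period_years=1.5):
    """Factor by which capacity grows over the given number of years."""
    return 2 ** (years / doubling_period_years)

# Over 30 years at an 18-month doubling period, capacity grows by
# 2**20, i.e. roughly a million-fold.
print(round(capacity_multiplier(30)))  # prints 1048576
```

The point of the sketch is just that constant-percentage growth compounds into the huge "key events keep accelerating" numbers Kurzweil leans on; whether the trend actually continues is a separate (empirical) question.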
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Tassman is offline
Old Yesterday, 04:00 PM   #149
BigFace42
Scholar
 
Join Date: Apr 2012
Posts: 74
Originally Posted by Tassman View Post
I haven’t read that one (sounds like my type of book) but Ray Kurzweil’s ‘The Age of Spiritual Machines’ is also a compelling read.

Kurzweil believes evolution provides evidence that humans will one day create machines more intelligent than they are. He presents his law of accelerating returns to explain why "key events" happen more frequently as time marches on. He also explains why the computational capacity of computers is increasing exponentially and that this increase is one ingredient in the creation of artificial intelligence; the others are automatic knowledge acquisition and algorithms like recursion, neural networks, and genetic algorithms. (paraphrased from the blurb)
Thanks Tassman, will check this one out.

One of the things that made sense to me in Life 3.0 was his description of what's needed for intelligence at a physical level. I'll dig back into the book to find the details when I have some time, but essentially we should be able to create it. It seems inevitable if we don't wind up destroying our technological society first.

I also like that it's pragmatic, and the work he is doing seems important: really think now about the ways we develop AI and its impact, so that humanity isn't resigned to life's rubbish heap (not his words). He recognises that when AI will emerge is difficult to predict, but based on surveys of the AI community the average estimate currently falls in the middle part of this century.

The book also looks at the consciousness aspect and argues that it's a bit of a red herring: competence is the key. I agree here. If AI is able to do anything better than we can, what do we do?
BigFace42 is offline
Old Yesterday, 09:01 PM   #150
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 6,598
I'm 80% certain this guy is a member here. The coincidences show up too often.

https://www.smbc-comics.com/comic/whoopsie
Beelzebuddy is offline
Old Today, 07:37 AM   #151
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,412
Originally Posted by JoeMorgue View Post
And my answer still has to be a sort of tempered "It's gonna happen regardless, might as well focus our effort on making it work in our favor." Sure, we could narrow the question to "Should we make an AI not knowing what is going to happen?", but that's on the same level as "Well, just invent an AI that can't turn evil" in terms of restrictions on what's being discussed.

But even beyond all that, I'm still a supporter of the idea. We built machines to do things our bodies are too weak to do. I don't see a difference in doing the same thing for things our "minds" are too weak to do.
Don't get me wrong, I agree with you in principle, but I wonder if we're being too naive to think that this is something we would have any control over, so that the concept of "trying to make it work in our favor" just wouldn't apply. If, by definition, we create something that can get exponentially smarter than us, build its own concepts, and have its own point of view about things, we are literally creating a Frankenstein's monster, one we would lose control over. We simply can't imagine what kind of ideas or plans an entity smarter than us would come up with, precisely because we're not smart enough to imagine them. It's a gamble. I'm not saying it will necessarily come up with ideas that are detrimental to us... but it might. So why take that risk?

Now, it may very well be that we can create something that, even if it gets smarter than us, can never do anything that would destroy us. The problem is that defining those parameters may be harder than we think, since there are countless ways to do things that, even indirectly or unintentionally, could destroy us. And even if such an entity was successfully designed never to destroy us, that doesn't mean there aren't other things it could do that would not be beneficial to us.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan
Ron_Tomkins is offline
Old Today, 12:45 PM   #152
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,710
Originally Posted by Ron_Tomkins View Post
We simply can't imagine what kind of ideas/plans an entity smarter than us would come up with, precisely because we're not smart enough to imagine it.
I think I have a good analogy that will help clarify what kind of a risk this is.

Suppose we are all workers in a manufacturing plant. We decide that, to better facilitate our work, we are going to install an automated robotic arm that will take care of the toughest, most back-breaking work.

Naturally, the topic of safety comes up. What do we do if it behaves in a way that we don't want? What if it is at risk of damaging equipment, or hurting someone? "That's easy to fix," says Frank. "Just grab onto it, and hold it in place. Then it won't be able to move around."

This is a "solution" that misses the nature of the problem. You can't hold the robotic arm down. It's significantly stronger than you. You will lose that fight.

An artificial general intelligence is the same way. For any "solution" you can come up with, something significantly smarter will find a way around it. You will be outsmarted, because that's what the thing is supposed to do in the first place.
I Am The Scum is online now