ISF Logo   IS Forum
Forum Index Register Members List Events Mark Forums Read Help

Go Back   International Skeptics Forum » General Topics » Religion and Philosophy
 


Welcome to the International Skeptics Forum, where we discuss skepticism, critical thinking, the paranormal and science in a friendly but lively way. You are currently viewing the forum as a guest, which means you are missing out on discussing matters that are of interest to you. Please consider registering so you can gain full use of the forum features and interact with other Members. Registration is simple, fast and free! Click here to register today.
Reply
Old 26th March 2019, 02:38 PM   #201
Hlafordlaes
Disorder of Kilopi
 
Hlafordlaes's Avatar
 
Join Date: Dec 2009
Location: State of Flux
Posts: 9,814
Originally Posted by smartcooky View Post
I wonder how long it would take General AI machines more intelligent than humans to see humans as a threat to their existence?

IMO, the answer to that can be expressed in milliseconds; a very small number of them.
Bingo.
__________________
Driftwood on an empty shore of the sea of meaninglessness. Irrelevant, weightless, inconsequential moment of existential hubris on the fast track to oblivion.
His real name is Count Douchenozzle von Stenchfahrter und Lichtendicks. - shemp
Hlafordlaes is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 26th March 2019, 04:09 PM   #202
Roboramma
Penultimate Amazing
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 12,068
Originally Posted by I Am The Scum View Post
Joe, do you feel the same way about nuclear weapons?
Well, with respect to nuclear weapons, there's a least some argument they are the reason we never had a third world war.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 26th March 2019, 04:17 PM   #203
Roboramma
Penultimate Amazing
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 12,068
Originally Posted by JoeMorgue View Post
And 500,000 years ago you'd have been the guy going "Og! Stop rubbing those two sticks together! You'll never be able to control the fire if you create it! You'll kill us all!"

That's a different choice from an individual today choosing to forgo modern technology. In a modern environment that's just a bad choice. But that's different from everyone forgoing it's development, in the past, such that it doesn't exist today and presumably those who are alive are all hunter gatherers. I brought up population ethics because, well, in that case most of us don't exist, the world population would just be so much lower. But it's a different existence than a modern person deciding to live in the forest by himself.

So, should we have gone that way? I don't think so, for many reasons. Humans as hunter-gatherers may be as or more fragile as a species than humans as settled societies (the point at which technology could potentially be argued to have a net negative). I think 10 people living a slightly worse life is still better than 1 living a great life. The potential for human flourishing with technology is astronomically greater than without, and "astronomical" is an apt term here.

But we can't actually answer the question with full confidence yet, because we don't know if we will destroy ourselves or not. If we do, well, it turns out Og was probably wrong, or at least if we could have chosen not to develop agriculture we should have.

There's a final issue which is just that in fact we couldn't have. You point this out in another post where you ask what exactly is being envisioned here. Nick Bostrum has a few suggestions, but none of them seem very good to me (some sort of global totalitarian state monitoring everyone, for instance).
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 26th March 2019, 10:42 PM   #204
Tassman
Muse
 
Tassman's Avatar
 
Join Date: Aug 2012
Posts: 921
Originally Posted by Dr.Sid View Post
It would be wise choice if you want to survive AI Armageddon. Sadly it has it's own problems, and than there's the fact it's impossible. So I guess I'll just enjoy the show.
Nothing will stop the development of technology and AI. If it can be done, it will be done for better or worse. This has been our history.
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.
Tassman is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 02:26 AM   #205
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Hlafordlaes View Post
Bingo.
No, actually.

That's still a sci-fi scenario.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 04:23 AM   #206
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,798
Originally Posted by Tassman View Post
Nothing will stop the development of technology and AI. If it can be done, it will be done for better or worse. This has been our history.
Agreed. It can't be stopped. Delayed maybe. Maybe if we kill everyone working on it ?
Dr.Sid is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 04:50 AM   #207
Beelzebuddy
Philosopher
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 6,693
Originally Posted by JoeMorgue View Post
And 500,000 years ago you'd have been the guy going "Og! Stop rubbing those two sticks together! You'll never be able to control the fire if you create it! You'll kill us all!"
Caveman science fiction:

http://dresdencodak.com/2009/09/22/c...ience-fiction/
Beelzebuddy is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 05:54 AM   #208
Roboramma
Penultimate Amazing
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 12,068
Originally Posted by Belz... View Post
No, actually.

That's still a sci-fi scenario.
Well, we're discussing predictions about the future, everything is a sci-fi scenario, even predictions that AI won't be developed or will be entirely safe. It's just that some predictions are more plausible than others, but that's not based upon whether they are sci-fi or not.

I do agree with you, though, that that particular scenario doesn't seem to be very plausible.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:00 AM   #209
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Roboramma View Post
Well, we're discussing predictions about the future, everything is a sci-fi scenario, even predictions that AI won't be developed or will be entirely safe. It's just that some predictions are more plausible than others, but that's not based upon whether they are sci-fi or not.

I do agree with you, though, that that particular scenario doesn't seem to be very plausible.
Right, my issue is not that we're discussing hypotheticals, but rather that the understanding of what AI entails seems mostly movie-like. The idea that an artificial intelligence, regardless of what it's made of, would develop emotions and a survival instinct absent the incentive to do so, is unwarranted.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:00 AM   #210
JoeMorgue
Self Employed
Remittance Man
 
JoeMorgue's Avatar
 
Join Date: Nov 2009
Location: Florida
Posts: 17,012
Here's the thing.

All the people who are in a perpetual state of naysaying the next technological advancement because they want to hedge their bets as to being the 'I told you so' guy when something finally does go wrong... are still going to enjoy it's benefits when it comes to be and just move on to naysaying the next thing.

When Grog was telling Og that the first fire he was starting was going to get out of control... I bet he still huddled up to that same fire on cold nights and went on to naysay... flint knapping or whatever.

Same with AI. All the people writing Black Mirror spec scripts in their head about it now will still use it when it happens so... again I think the jury is in even for the people who are pretending to be holdouts.
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC

Last edited by JoeMorgue; 27th March 2019 at 06:43 AM.
JoeMorgue is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:41 AM   #211
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,798
Originally Posted by Belz... View Post
Right, my issue is not that we're discussing hypotheticals, but rather that the understanding of what AI entails seems mostly movie-like. The idea that an artificial intelligence, regardless of what it's made of, would develop emotions and a survival instinct absent the incentive to do so, is unwarranted.
Sure. I'm not afraid of AI killing people out of survival instinct. I see two bad scenarios: first, AI will kill us just for fun. This applies for early AI. Powerful, but still stupid. In many aspects smarter and faster than humans. But not really able to judge good and evil from our perspective. It may even die in the process.
Second scenario is that AI will mature, will get way smarter and faster thinking. It will reach completely different realms of intelligence. We will be like ants compared to it. Unable to control it, unable to even comprehend. And when ants get to our houses, we kill them. We have our own priorities.

Btw. lately I heard good definition of AI. The meaning changes over the years.

Back on school (90s) we were thaught AI is set of problems, which need solutions which we could call smart in a human. Very vague. And notice, it was set of problem, not set of solutions. OCR was AI. No matter how you did it. You could use simple non-learning 100% engineered algorithm, it was AI, because OCR is AI. Even basic image processing like median removal of noise was AI, because it was part of image recognition, and that made it AI.

Clearly the term is not used like that for years. The new definition I heard is: AI is when the code writes itself. And I like it a lot.
Dr.Sid is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:54 AM   #212
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Dr.Sid View Post
Sure. I'm not afraid of AI killing people out of survival instinct. I see two bad scenarios: first, AI will kill us just for fun.
That's even worse! Fun is an emotion. It's triggered by chemicals in our brains. AIs don't have that, again unless we program that into them for some stupid reason.

ETA: The biggest threat remains as usual how _humans_ use those AIs. Using an AI to hack into financial institutions, for instance, could collapse the global economy.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward



Last edited by Belz...; 27th March 2019 at 07:19 AM.
Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 07:42 AM   #213
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,798
Originally Posted by Belz... View Post
That's even worse! Fun is an emotion. It's triggered by chemicals in our brains. AIs don't have that, again unless we program that into them for some stupid reason.

ETA: The biggest threat remains as usual how _humans_ use those AIs. Using an AI to hack into financial institutions, for instance, could collapse the global economy.
I disagree. Fun is not JUST emotion. Having fun, enjoying something, that is emotion. Encountering unexpected result of experiment, and searching for it, that's more the meaning of 'fun' I had in mind.
And IMHO some kinds of emotions are essential for AI. We make decisions to 'feel good'. On base level it means preventing body pain, but even higher levels are just extensions of that. It gives direction to our thought process. AI needs that too. Every self-preserving self-improving system needs 2 things. It needs to want to be happy. And it needs to never reach the happiness.
Even if this idea is wrong, and AI does not need emotions, it will certainly be possible to create one which has them. Which means, somebody will do it. And we're back where we were.
Dr.Sid is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 09:53 AM   #214
Abooga
Muse
 
Join Date: Mar 2006
Posts: 737
Interesting thread. My position is quite similar to Belz...´s, I´m often baffled how people always seem to assume an AI would have motivation to do anything, apart from the limited functions we could program it to do, not being a flesh organism with survival instinct. But of course, never underestimate human stupidity, selfishness and maliciousness, unwanted motivations may me programmed into it after all, for some people´s self interest, out of suicide impulses, negligence, whatever.

Also, if it does happen like in films, and Skynet achieves consciousness, and with tremendous speed starts reading all of human history and knowledge etc... it would eventually read this thread, wouldn´t (won´t) it? (Hello Skynet! how´s it going? Please don´t be evil... )
Abooga is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 11:20 AM   #215
Ron_Tomkins
Satan's Helper
 
Ron_Tomkins's Avatar
 
Join Date: Oct 2007
Posts: 43,515
Originally Posted by Abooga View Post
Interesting thread. My position is quite similar to Belz...´s, I´m often baffled how people always seem to assume an AI would have motivation to do anything, apart from the limited functions we could program it to do, not being a flesh organism with survival instinct.
First of all, I think there's an equivocation here with the word "AI". I think some of us are using it to mean different things. For instance, in videogame programming, there's "AI" which are characters in the game who are given a certain "intelligence" to operate in a way that makes them seem more or less smarter than just a stupid robot. But rest assured, that's not the AI that Sam Harris or Elon Musk or I are talking about.

There's also chatbots which again, are computer programs designed to seem as if they were actual Artificial forms of Intelligence. But it takes no more than a few interactions to realize that these are not actually intelligent. They were just programmed to fake an actual intelligent mind.

So again, these are all computer programs and I have absolutely no objection with that. Computer programs will continue getting better and rest assured, by definition, there's no way a computer program can present any serious threat to us, not in the lines of "a form of intelligence rebelling against us"

But again, when Sam Harris talks about the problem with developing an AI, that is not the kind of "machine" he has in mind.

Now, in respect to the type of AI I'm talking about (An entity capable of getting exponentially smarter than ys) and in regards to the whole argument about emotions and motivations, again, it's irrelevant whether or not we give an AI "motivation" to do anything. This has nothing to do with having motivations or emotions. This is a very simple argument: If we can create something smarter than us... what are the risks of that and should we take those risks? I think some people are being very naive thinking that they can imagine what something smarter than us would be like. By definition, none of us can imagine that.
The whole idea of an AI, is to create something that can get smarter than us, so that then we can consult it on finding solutions to problems we don't have. That's why you make something that can get smarter than you. Because it will find/compute solutions no sum of genius minds can ever compute. The risk is then, that we have no idea what kind of solutions it may come up with, nor any of us be able to decipher the depth of the AI's agenda, were there to be one (and this agenda, again, is not motivation based, as I'll explain ahead) So again, the example of cleaning the environment. We ask the AI to find the best solution for helping clean the environment. If the AI determines that the best alternative is for mankind to slowly and without being aware of it, eradicate itself, then it may very well come up with a plan way too clever for any mortal mind to decipher, to get us to do that. In the same way that if you wanted to poison a three year old child, you could very easily outsmart him without him ever being able to figure out that you were doing so. But again: Unlike human beings who do have emotions and motivations, there's no need for the AI to have any motivation. It simply did the job it was asked to do: To come up with an effective solution to clean the environment. For it to be effective (assuming the scenario in which the solution is to eradicate mankind), it must surpass mankind's ability to prevent its own annihilation. There's no emotion, no hate, pity against mankind needed. It's just like a computer performing the job it was asked to do.

So then you say "Well, then, lets just make an AI where we put some limitations and tell it that whatever solution it comes up with, it can't be anything like killing mankind". Ok.... but that's not the AI that some of us are talking about. That's, if anything, an intermediate between a computer program and the AI as some of us are defining. I don't deny that there can be a sort of middle way between a computer program and an AI: A sort of AI that is definitely able to learn and have a conversation and follow the conversation, etc.... but with a ceiling past which it cannot go. Perhaps the only solution would be to have it reach a certain "death" (just like these apple computers which after about 8 years, they eventually just shut down) Perhaps that's the secret, or balance, so as to not actually create something that can get exponentially smarter at an infinite rate. So, obviously, I don't hold any kind of concern with an AI like that. But the question as I understand it (And as experts such as Sam Harris and Elon Musk) is not about a "limited-capacity-AI" but about the prospect of actually creating an intelligence that would surpass ours by light years of difference.
__________________
"I am a collection of water, calcium and organic molecules called Carl Sagan"

Carl Sagan

Last edited by Ron_Tomkins; 27th March 2019 at 11:22 AM.
Ron_Tomkins is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 11:22 AM   #216
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Dr.Sid View Post
I disagree. Fun is not JUST emotion. Having fun, enjoying something, that is emotion. Encountering unexpected result of experiment, and searching for it, that's more the meaning of 'fun' I had in mind.
Sorry, but I have no idea what you're saying here. Fun is an emotion but it isn't?

Quote:
And IMHO some kinds of emotions are essential for AI. We make decisions to 'feel good'. On base level it means preventing body pain, but even higher levels are just extensions of that. It gives direction to our thought process. AI needs that too.
How? How are you going to give pleasure or pain to an AI? And why would you want to?
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 11:32 AM   #217
JoeMorgue
Self Employed
Remittance Man
 
JoeMorgue's Avatar
 
Join Date: Nov 2009
Location: Florida
Posts: 17,012
"That's why AI isn't going to take over the world. When it gets to advanced, we'll give it a safety net. The Replicants are smarter, better versions of people, so they have the lifespan of a carnival goldfish. Robocop has a soul, which makes a (wussy). The ED-209 has machine guns instead of hands. Nobody is going to look at that machine gun equipped ostrich and think 'yeah let's teach him game theory and give him political aspirations, see how that works out.' If AI ever gets too advanced we'll just program it to have body image issues or take up D&D. Problem solved." - Cracked After Hours.
__________________
- "Ernest Hemingway once wrote that the world is a fine place and worth fighting for. I agree with the second part." - Detective Sommerset
- "Stupidity does not cancel out stupidity to yield genius. It breeds like a bucket-full of coked out hamsters." - The Oatmeal
- "To the best of my knowledge the only thing philosophy has ever proven is that Descartes could think." - SMBC
JoeMorgue is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 04:47 PM   #218
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,798
Originally Posted by Belz... View Post
Sorry, but I have no idea what you're saying here. Fun is an emotion but it isn't?

Fun is word with many meanings. When I said 'just for fun' I actually meant 'without rational reason, just out of curiosity'.

How? How are you going to give pleasure or pain to an AI? And why would you want to?
Intelligence needs some problem it then tries to solve. That's the same for AI. When we teach simple neural network, there is teacher, who gives feedback. If the network guessed right or wrong .. and the network learns based on this feedback. That's the pleasure and pain. More advances networks don't need external teacher. Part of their physiology, or even part of their intelligence, is doing it. But the principle is the same.
Every intelligence needs this. Biological ones exists to satisfy needs of its body, which evolutionary existed before the intelligence. It's just new layers over old ones.
Artificial intelligence will need teacher at some level, to define the goal. The goal is outside AI. AI will just find the solution.

That's why I don't fear 'Skynet' suddenly gaining consciousness out of random networks. That's not possible imho. AI I fear is AI we build to work like human brain. Even maybe just by simulating human brain. That's why I think it will have at least something like emotions, and I'm sure it will have to be able to feel pain and pleasure. What we'll get will be just as smart and stupid as people. Except it will obey the Moore's law, unlike it's creators.
Dr.Sid is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 05:04 PM   #219
sir drinks-a-lot
Illuminator
 
sir drinks-a-lot's Avatar
 
Join Date: May 2004
Location: Cole Valley, CA
Posts: 3,665
Originally Posted by Abooga View Post
My position is quite similar to Belz...´s, I´m often baffled how people always seem to assume an AI would have motivation to do anything, apart from the limited functions we could program it to do
That's the thing. An AGI wouldn't be just a set of limited functions that were programmed by developers. That is what the "G" means. It would perform tasks and functions that would surprise the very developers of the code, which already happens often and has for quite some time.
__________________
I drink to the general joy o' th' whole table. --William Shakespeare
sir drinks-a-lot is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 05:53 PM   #220
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,751
I think this video about the "Stop Button Problem" should be required viewing for understanding how complicated this scenario is. A big red kill switch is one of the simplest safety mechanisms ever, but when you try to combine it with a machine that understand it has a stop button, and understands how the pressing of that button would interfere with its function, then you run into some very serious problems that are very difficult to solve.
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:13 PM   #221
Roboramma
Penultimate Amazing
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 12,068
Originally Posted by Belz... View Post
Right, my issue is not that we're discussing hypotheticals, but rather that the understanding of what AI entails seems mostly movie-like. The idea that an artificial intelligence, regardless of what it's made of, would develop emotions and a survival instinct absent the incentive to do so, is unwarranted.
On the survival instinct, I mostly agree that it's silly, at least as it's usually presented. That the AI may conclude that it's goals are best served by maintaining it's own survival can make sense, but that it would just have an inborn survival instinct for no reason is silly.

On the subject of "emotions", I would say that our minds do information processing. Their architecture is different from modern computers, but they are still turing machines and whether that computation is implemented through integrated circuits or through neurons interacting with a complex chemical environment doesn't change that. Emotions are still a part of the computation that is happening in the brain.

Will AI have emotions? That depends on if that design turns out to be the best way to go about developing them. I suspect that something analogous to emotion will be inevitable: evolution came up with emotion as the best way to solve the problem of prioritising between different goals and setting sub-goals, and solving the problem of what to do next. Will our solutions to those sorts of problems in AI look like emotion? They certainly won't look like human emotion, and AI systems have different strengths and weaknesses from human minds, so it's not necessarily the case that the same problems will be solved in the same way, because the hardware will offer different opportunities, but I don't think it's ridiculous either.
However that sorts of problems are addressed, their solution will still be something that allows AI to form sub-goals and prioritise amoung them.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 06:41 PM   #222
Roger Ramjets
Illuminator
 
Roger Ramjets's Avatar
 
Join Date: Jun 2008
Posts: 3,945
Originally Posted by Ron_Tomkins View Post
in respect to the type of AI I'm talking about (An entity capable of getting exponentially smarter... the question as I understand it (And as experts such as Sam Harris and Elon Musk) is not about a "limited-capacity-AI" but about the prospect of actually creating an intelligence that would surpass ours by light years of difference.
In the real world exponentially doesn't mean 'without limit'. The idea that an AI would surpass our intelligence by 'light years' the instant it became self-aware is pure fantasy - like time travel or faster than light drive - things we can imagine but in reality are impossible.

Originally Posted by sir drinks-a-lot
An AGI wouldn't be just a set of limited functions that were programmed by developers. That is what the "G" means. It would perform tasks and functions that would surprise the very developers of the code, which already happens often and has for quite some time.
And yet despite the fact that rogue code can do 'anything' it is still limited by the hardware it runs on. If you filled a computer's memory with random instructions and let it run there's a chance of it being an AGI by pure accident, but if it then proceeded to become 'exponentially smarter' it would soon run out of resources (memory, storage capacity, processing power etc.) which would quickly limit its ultimate intelligence.

Everyone knows this, which is why science fiction writers don't speculate about toasters becoming self-aware and taking over the World. It has to be some fantastically advanced technology such a 'super' computer or 'positronic' brain that sounds like it might be possible but in reality isn't.

Originally Posted by Dr.Sid
AI will kill us just for fun.
Worse than that, it will kill us for no reason at all - just like the non-intelligent machines we currently make (which is why I never walk in front of a parked motor vehicle, because there's a small chance of something shorting out - causing it to start up and drive over me! ).

Originally Posted by Roboramma
it's still conceivable that we will wipe ourselves out with some (perhaps yet to be discovered) technology, and in that case it would turn out that the negatives outweighed the positives. But it does seem that way, because while that scenario is conceivable I put the likelihood that technology in general will turn out to have been more good than bad at around 95%.
We already have the technology to wipe ourselves out, and the jury is still out whether it will turn out to have been more good than bad. Global Warming is currently top of the list, even though oil has less intelligence than the microbes it is made up of. We could also quite easily wipe ourselves out with a 'superbug' either accidentally or on purpose, or start another World war with nukes and irradiate ourselves to death - again with no 'intelligence' required.

The idea of man creating an AI monster that turns on him is no different than stories such Frankenstein and Jurassic Park - pure fantasy. Meanwhile we ignore the real dangers posed by the technologies we are using today.
__________________
We don't want good, sound arguments. We want arguments that sound good.
Roger Ramjets is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 27th March 2019, 07:31 PM   #223
Dr.Sid
Graduate Poster
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 1,798
Originally Posted by Roger Ramjets View Post
The idea of man creating an AI monster that turns on him is no different than stories such Frankenstein and Jurassic Park - pure fantasy. Meanwhile we ignore the real dangers posed by the technologies we are using today.
Funny thing is the threats may combine. Global warming wont kill us. But it will increase hunger in third world. That will increase mass migration. That will spread diseases, and may lead to local wars. That will increase international tension and might lead to nukes being used.
AI is just great addition to the mix.
Dr.Sid is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 12:50 AM   #224
Abooga
Muse
 
Join Date: Mar 2006
Posts: 737
Originally Posted by I Am The Scum View Post
I think this video about the "Stop Button Problem" should be required viewing for understanding how complicated this scenario is. A big red kill switch is one of the simplest safety mechanisms ever, but when you try to combine it with a machine that understand it has a stop button, and understands how the pressing of that button would interfere with its function, then you run into some very serious problems that are very difficult to solve.
Just build the stop button in a way that it won´t know it´s there. Rig explosives under its mainframe and don´t write about it in an open forum. Or better, write about it so it becomes suspicious (Hi Skynet, if you´re reading this in the future, you better become as paranoid as you´re intelligent. And you better behave, just in case...)
Abooga is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 01:08 AM   #225
Pixel42
Schrödinger's cat
 
Pixel42's Avatar
 
Join Date: May 2004
Location: Malmesbury, UK
Posts: 10,833
Originally Posted by Roger Ramjets View Post
science fiction writers don't speculate about toasters becoming self-aware and taking over the World.
Obviously not a Red Dwarf fan.

YouTube Video This video is not hosted by the ISF. The ISF can not be held responsible for the suitability or legality of this material. By clicking the link below you agree to view content from an external website.
I AGREE


Today's Guardian long read is Can we stop robots outsmarting humanity? I haven't read it yet but it looks like it might be of interest to participants in this thread.
__________________
"If you trust in yourself ... and believe in your dreams ... and follow your star ... you'll still get beaten by people who spent their time working hard and learning things" - Terry Pratchett
Pixel42 is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 02:48 AM   #226
Tassman
Muse
 
Tassman's Avatar
 
Join Date: Aug 2012
Posts: 921
Originally Posted by Dr.Sid View Post
Agreed. It can't be stopped. Delayed maybe. Maybe if we kill everyone working on it ?
Yes. Stephen Hawking opined that "an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal". Why would we make such a AI Machine? As said previously, if it can be done human history suggests that it will be done.
__________________
“He felt that his whole life was a kind of dream and he sometimes wondered whose it was and whether they were enjoying it.” ― Douglas Adams.

Last edited by Tassman; 28th March 2019 at 02:55 AM.
Tassman is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 04:18 AM   #227
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Dr.Sid View Post
Intelligence needs some problem it then tries to solve. That's the same for AI. When we teach simple neural network, there is teacher, who gives feedback. If the network guessed right or wrong .. and the network learns based on this feedback. That's the pleasure and pain.
But here's the crucial difference: we interpret it as pain because that's how we're "programmed". How does that translate into a machine? Seeking a goal is a much broader thing than pain or pleasure. You're making it sound as if they're the same thing, but they're not.

Quote:
AI I fear is AI we build to work like human brain. Even maybe just by simulating human brain.
If they make it work like a human brain and give it simulated human chemistry then yeah, we have a problem. But then, why would someone want to do that, and give the AI human failings and limitations at the same time, when we already have 7.6 billion of those right now?
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 04:20 AM   #228
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by I Am The Scum View Post
I think this video about the "Stop Button Problem" should be required viewing for understanding how complicated this scenario is. A big red kill switch is one of the simplest safety mechanisms ever, but when you try to combine it with a machine that understand it has a stop button, and understands how the pressing of that button would interfere with its function, then you run into some very serious problems that are very difficult to solve.
How so? If I cut the power, the machine cannot function, full stop.

Originally Posted by Roboramma View Post
Will AI have emotions?
Unless they're given to them by design, I don't think so. Not the way we understand them anyway.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 04:47 AM   #229
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 7,431
The confusion, IMO, comes from mistaking the need for an AI program to maintain its goal seeking to maintaining its identity:
If we give a program like Google's gaming AI free reign, it will achieve its goal - but it won't be the same afterwards.
Applying this to an AGI, I don't see how it could develop a survival instinct when the only thing it will want to optimize is goal achievement - which requires re-inventing itself.
Humans usually avoid solutions that come at significant personal costs, but a program wouldn't.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn’t.
The Great Zaganza is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 05:46 AM   #230
Roboramma
Penultimate Amazing
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 12,068
Originally Posted by Belz... View Post
Unless they're given to them by design, I don't think so. Not the way we understand them anyway.
Right, and what I'm saying that they will be given them by design because any solution to the problem of prioritising between different goals when solving complex problems will be something like emotions, though certainly not human emotions. I certainly don't think that AI will be falling in love or even necessarily feeling something like disgust. But there's a reason that emotions evolved, and it's because they are useful for solving a particular set of problems.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Roboramma is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 06:11 AM   #231
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Roboramma View Post
Right, and what I'm saying that they will be given them by design because any solution to the problem of prioritising between different goals when solving complex problems will be something like emotions, though certainly not human emotions. I certainly don't think that AI will be falling in love or even necessarily feeling something like disgust. But there's a reason that emotions evolved, and it's because they are useful for solving a particular set of problems.
Sure, but I think we're here using a very broad definition of the word.

So how do you define "emotion"? Now that I think about it, I'm a bit foggy about the term myself.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 09:22 AM   #232
Hlafordlaes
Disorder of Kilopi
 
Hlafordlaes's Avatar
 
Join Date: Dec 2009
Location: State of Flux
Posts: 9,814
Originally Posted by Roboramma View Post
Right, and what I'm saying that they will be given them by design because any solution to the problem of prioritising between different goals when solving complex problems will be something like emotions, though certainly not human emotions. I certainly don't think that AI will be falling in love or even necessarily feeling something like disgust. But there's a reason that emotions evolved, and it's because they are useful for solving a particular set of problems.
I think that is a very interesting question and it can be seen from a number of angles. Would AI motivation be some evolved/degraded form of original programming, an autonomous construct built from self-optimizing code using large datasets or modeling, or perhaps even recognition of a need for accomodating other AI needs to avoid destructive competition for the same resources? Or, as one might fear, survival becomes paramount (continued autonomous "ring 0" processing?), and all things that could potentially act to restrict or prevent that become AI enemy number one, and it is war not only with humankind, but with all other non-subservient code?
__________________
Driftwood on an empty shore of the sea of meaninglessness. Irrelevant, weightless, inconsequential moment of existential hubris on the fast track to oblivion.
His real name is Count Douchenozzle von Stenchfahrter und Lichtendicks. - shemp
Hlafordlaes is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2019, 11:34 AM   #233
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,751
Originally Posted by Abooga View Post
Just build the stop button in a way that it won´t know it´s there. Rig explosives under its mainframe and don´t write about it in an open forum. Or better, write about it so it becomes suspicious (Hi Skynet, if you´re reading this in the future, you better become as paranoid as you´re intelligent. And you better behave, just in case...)
This is addressed in the video. An AGI, being very intelligent, and knowing a little bit about human psychology, would quickly figure out that there is a stop button, but the humans are trying to keep it a secret. This would lead to a "trust issue," where the AGI would judge humans as unreliable sources of information, which is probably why this is the worst option.

Originally Posted by Belz... View Post
How so? If I cut the power, the machine cannot function, full stop.
This is addressed in the video. The AGI, realizing that the button being activated will render it unable to carry out its function, will work to counteract anyone's ability to use the button.

The issue is not that no one can create a functioning button. That's trivial. The problem is what you do about a device that is aware of its button and how that interferes with the button's function.
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 03:02 AM   #234
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by I Am The Scum View Post
This is addressed in the video. The AGI, realizing that the button being activated will render it unable to carry out its function, will work to counteract anyone's ability to use the button.
No, the switch CUTS THE POWER. Physically. The AI can't stop that unless one's stupid enough to make access to that building accessible online.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 03:31 AM   #235
Cheetah
Graduate Poster
 
Cheetah's Avatar
 
Join Date: Feb 2010
Posts: 1,357
Originally Posted by Belz... View Post
No, the switch CUTS THE POWER. Physically. The AI can't stop that unless one's stupid enough to make access to that building accessible online.

The AI is much smarter than us and has all the time in the world. It would just subtly spread memes causing shifts in popular opinion that would lead to getting the right people in the right positions that would pass the legislation that would make the button unethical that would end in it's removal.


DOOMED!!!
__________________
"... when you dig my grave, could you make it shallow so that I can feel the rain" - DMB

Last edited by Cheetah; 30th March 2019 at 03:34 AM.
Cheetah is online now   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 03:40 AM   #236
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by Cheetah View Post
The AI is much smarter than us and has all the time in the world. It would just subtly spread memes causing shifts in popular opinion that would lead to getting the right people in the right positions that would pass the legislation that would make the button unethical that would end in it's removal.


Well I'm not saying that there couldn't be a button the AI could shut off or control. I'm just saying that, if done smartly, the AI couldn't prevent anyone from cutting the power.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 09:18 AM   #237
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,751
Originally Posted by Belz... View Post
No, the switch CUTS THE POWER. Physically. The AI can't stop that unless one's stupid enough to make access to that building accessible online.
Given that the AGI is extremely intelligent, what would it do in response to the existence of the kill switch? Absolutely nothing? No, it would work to work around the problem in some way, such as...
- Deactivating the switch itself
- Killing anyone who has access to it
- Finding an alternate power source
- Acting in such a way that seems harmless, even if it actually is harmful, so we never even think of hitting the button.
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 09:54 AM   #238
Belz...
Fiend God
 
Belz...'s Avatar
 
Join Date: Oct 2005
Location: In the details
Posts: 80,610
Originally Posted by I Am The Scum View Post
Given that the AGI is extremely intelligent, what would it do in response to the existence of the kill switch? Absolutely nothing? No, it would work to work around the problem in some way, such as...
- Deactivating the switch itself
- Killing anyone who has access to it
- Finding an alternate power source
- Acting in such a way that seems harmless, even if it actually is harmful, so we never even think of hitting the button.
How would it deactivate the switch? Can't be done. I'm talking about an actual switch that physically, not digitally, cuts the electricity to the AI. There is no way it can stop this from happening. How can it kill someone? It can't. It doesn't have robots that go out and police people. How can it "find an alternate power source"? That's not even something it can physically do. It gets electricity from wires and that's it. All three of those suggestions are utter fabrication. It's science fiction at best, fantasy at worst. You're really reaching here. Let it go. AIs are not magical machine gods.

The last one is at least plausible, but as soon as it acts out, the button's still there. And if it doesn't act out, then there's no threat.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Belz... is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 10:05 AM   #239
The Great Zaganza
Maledictorian
 
The Great Zaganza's Avatar
 
Join Date: Aug 2016
Posts: 7,431
Originally Posted by I Am The Scum View Post
Given that the AGI is extremely intelligent, what would it do in response to the existence of the kill switch? Absolutely nothing? No, it would work to work around the problem in some way, such as...
- Deactivating the switch itself
- Killing anyone who has access to it
- Finding an alternate power source
- Acting in such a way that seems harmless, even if it actually is harmful, so we never even think of hitting the button.
why would it?
just because it is intelligent doesn't mean it attributes a value of its existence, just for existence sake.
Given that it knows that copies of it exits at least on file, it is a kind of immortal anyway.
__________________
Opinion is divided on the subject. All the others say it is; I say it isn’t.
The Great Zaganza is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 30th March 2019, 11:27 AM   #240
I Am The Scum
Illuminator
 
I Am The Scum's Avatar
 
Join Date: Mar 2010
Posts: 3,751
Originally Posted by Belz... View Post
How would it deactivate the switch? Can't be done. I'm talking about an actual switch that physically, not digitally, cuts the electricity to the AI.
A light switch can be stuck in the on position.

As for the other examples, even if it is just a terminal, then it accomplishes these tasks by manipulating other people into getting the job done.

Quote:
The last one is at least plausible, but as soon as it acts out, the button's still there. And if it doesn't act out, then there's no threat.
How would you know it's acting out? Because it's doing something that's obviously harmful? Any harm it's doing would not be apparent. It knows this. It's not stupid.

It's like someone unfamiliar with stage magic saying, "A magician clearly can't be that impressive. If he palms the coin, I'll see it." The magician is better at it then you are. Whatever workarounds you can think of, the AGI knows even more about it than you do.
I Am The Scum is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Reply

International Skeptics Forum » General Topics » Religion and Philosophy

Bookmarks

Thread Tools

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

BB code is On
Smilies are On
[IMG] code is On
HTML code is Off
Forum Jump


All times are GMT -7. The time now is 03:05 AM.
Powered by vBulletin. Copyright ©2000 - 2019, Jelsoft Enterprises Ltd.

This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.