Old 2nd April 2019, 06:39 PM   #321
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 83,714
Originally Posted by I Am The Scum
Anything that can be done by a human on a computer, can be done even more effectively by an AGI.
Although the above is correct, you're still not doing what I proposed.

It seems more and more obvious that you have a feeling that your fears are justified, but like someone deathly afraid of elevators, you can't really articulate why in a way that fits the facts. And in fact you're aware that you can't, which is why you're staying away from the demonstration and sticking with vague premonitions.

Yes, anything that can be done by a human with a computer can be done by an AI. Things that can't be done by a human with a computer, however, generally can't be done by an AI either. Do you think a human can hack the royal mint and print money for himself? Neither can an AI.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 2nd April 2019, 07:14 PM   #322
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,987
Originally Posted by Belz...
Do you think a human can hack the royal mint and print money for himself?
I have no idea. I don't know what safeguards the mint has in place for such an occurrence.
Old 2nd April 2019, 11:14 PM   #323
Cheetah
Graduate Poster
 
 
Join Date: Feb 2010
Posts: 1,696
If the AI is not smart enough to tell what a real stamp is, how could it possibly be smart enough to hack anything, let alone do it more efficiently than a human?
How could an AI that is not sentient be aware of a threat to itself if it doesn't even know that it is a self? It wouldn't recognize humans as independent actors, never mind model our behavior and come up with fiendish plans to use that against us.


The idea that this AGI is just going to magically pop into existence is absurd. It's similar to the blank slate, the old idea that the brain of a baby is blank and everything is learnt.
Meat brains are very costly to maintain. The size of an animal's brain is very much a balance between survival advantages and energy consumption. Evolution has had a long time to hone those meaty NNs and to pack as much useful processing into an affordable number of brain cells.
I bet you couldn't make a human-equivalent AGI NN without using a similar number of nodes and connections: that is, 100 billion nodes and 100 trillion connections. I also bet the NN would need a lot of very specific and intricate structures, equivalents of what we have in our brains. It will need a visual cortex, an auditory cortex, etc. Those are not random structures your brain learns to use; they are incredibly complicated, and their basic structure is coded in your DNA.
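Just for scale, here is a quick back-of-envelope sketch in Python (the 4 bytes per value is purely an illustrative assumption, not a claim about how the brain stores anything):

Code:
# Rough storage estimate for a brain-scale NN, assuming (illustratively)
# one 32-bit value per connection weight and per node activation.
NODES = 100e9          # ~100 billion nodes
CONNECTIONS = 100e12   # ~100 trillion connections
BYTES_PER_VALUE = 4    # one 32-bit float each

weights_tb = CONNECTIONS * BYTES_PER_VALUE / 1e12
activations_gb = NODES * BYTES_PER_VALUE / 1e9

print(f"Weights alone: ~{weights_tb:.0f} TB")               # ~400 TB
print(f"Node activations alone: ~{activations_gb:.0f} GB")  # ~400 GB

Even that crude count puts the raw parameter storage in the hundreds of terabytes, before you model any of the structure itself.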

You might be able to trim down the size of the NN: it might not need a motor cortex the size of a human's, or vision and hearing as acute as ours, and it surely wouldn't need to smell. Then again, much of that is probably necessary (not smell; we use hardly any brain for smelling) if you want to make it as intelligent as a human. Humans employ their inner eye to imagine situations and solve problems; can you have an inner 3D model of the world without a visual cortex? Could you parse and be creative with language without language centers? Then there is our ability to understand and read the moods and intentions of other people, which is very, very complicated. All these substructures of the brain are incredibly intricate; they don't just pop into existence.

Humans can solve general problems because all these specialized structures work together. An AI will need them to be generally intelligent.

We should be able to simulate a visual cortex, or whatever bits we want to, in isolation long before we have the hardware to do a complete brain. We will also be able to simulate smaller, stupider brains long before a human equivalent is possible. Hopefully we will have a pretty good idea of how a brain works by the time making one much smarter than humans becomes possible.


You are also forgetting that even if the AI has all the equivalent structures present in the human brain, it still has to learn how to use them. It will start off like a baby. You will need to teach it to talk, etc. If it turns out to be a really mean baby, switch it off and try something different.
__________________
"... when you dig my grave, could you make it shallow so that I can feel the rain" - DMB

Last edited by Cheetah; 2nd April 2019 at 11:16 PM.
Old 3rd April 2019, 02:18 AM   #324
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 83,714
Originally Posted by I Am The Scum
I have no idea. I don't know what safeguards the mint has in place for such an occurrence.
Not having the printing press be online would not only be a start, it would be an end. Do you agree?

Here's a summary of the discussion. Let's see where we can agree. I think we've all more or less agreed already that the risk an AI poses is going to be either:

A) Poor or malicious use of an AI by humans. I think this is a given, as it's true for all technologies. If you misuse a knife, you might hurt or kill someone or damage some property; if you misuse a nuke you might start a global nuclear war, etc. So AIs have the exact same issue.

B) Social engineering by the AI to make humans serve its purposes. I think we've agreed to that one as well.

C) Unpredictable or out-of-bounds AI actions that can cause damage or issues. I think this is the one under contention, as the question is: what are the limits of what the AI can do? Clearly, an AI car can't raise the radio antenna unless a mechanism exists to raise it AND it's connected to the AI. That sort of thing. Limitations are both hardware (like the antenna example) and software (AIs have some core code that can't be transcended, limiting what they can do).

(C) is why I brought up the off switch. If the stamp-collecting AI misbehaves, I can unplug the computer from the socket, and the AI ceases to function; or I can unplug the computer from the network, and I can now deal with the AI locally. The same is true of any AI unless it's distributed over a wide network.

But speaking of the stamp collector AI, it's hard to fathom that its maker would forget to put in a clause that asks 1) for real stamps and 2) for collecting them within the bounds of the law. Otherwise the issue is A), not C). Regardless, and once again, every AI has limits to what it can do. It might be extremely powerful in terms of intelligence, but it can't access things that are not online directly, nor can it, to use a ridiculous example, transform my car into its robot form and punch me in the face.

Still, the stamp collector AI is not a general artificial intelligence, which is the topic of the thread. It's got a very specific goal and function, which is why it can be so single-minded. A general AI would have temporary goals, such as "computer, find all available information on supernovae including articles, papers and opinions, collate it, and summarize it for me." The AI would do the search for a while and then offer its conclusions. Once again, I assume it's been designed to operate within certain limits anyway, but once the summary is made it will stop the search.

I also want to bring up the sentience question, because despite what you said earlier, I think it's very much part of the conversation. We usually understand the term "AI" to mean that the machine is aware; conscious. Otherwise what's the difference between an AI and a complex algorithm like the sort we already have?

So far is there anything you disagree with?
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward



Last edited by Belz...; 3rd April 2019 at 02:20 AM.
Old 3rd April 2019, 06:26 AM   #325
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,987
I'm posting when I have a chance, so I'm going to address these issues one at a time.
Originally Posted by Belz...
Still, the stamp collector AI is not a general artificial intelligence, which is the topic of the thread. It's got a very specific goal and function, which is why it can be so single-minded. A general AI would have temporary goals, such as "computer, find all available information on supernovae including articles, papers and opinions, collate it, and summarize it for me."
The "general" in Artificial General Intelligence does not refer to the goal, but rather, the scope of the AI's operations. A chess AI, for example, knows only about the rules and objectives of a chess board, and can apply what it learns to very difficult chess problems. Challenge it to a game of Tic Tac Toe, however, and it can't even make a move. An AGI's scope would be the real world. It could apply it's intelligence to common, everyday problems, or very serious world-ending catastrophes.

Giving the AGI one solitary goal does not contradict this definition, provided the AI is applying its calculations to the real world (or a simulation of it).
Old 3rd April 2019, 07:09 AM   #326
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 83,714
Originally Posted by I Am The Scum
I'm posting when I have a chance, so I'm going to address these issues one at a time.
No problem.

Quote:
The "general" in Artificial General Intelligence does not refer to the goal, but rather, the scope of the AI's operations.
Yes, I wasn't very clear on that. Regardless, the two are related.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 3rd April 2019, 02:13 PM   #327
ynot
Philosopher
 
 
Join Date: Jan 2006
Posts: 8,316
Perhaps AI will only be a threat to humanity if it gets corrupted by AE (artificial emotion).
__________________
Paranormal beliefs are knowledge placebos.
Rumours of a god's existence have been greatly exaggerated.
To make truth from beliefs is to make truth mere make-believe.
Old 8th April 2019, 08:26 AM   #328
I Am The Scum
Illuminator
 
 
Join Date: Mar 2010
Posts: 3,987
The point about fake stamps is not that an AI can't tell the difference between real and fake stamps, but rather, that the distinction between real and fake stamps would either be overlooked or ill defined in the AI's programming.

If you think that's an easy thing to fix, then give it a shot. How do you tell the AI what a fake stamp is, while still accounting for things like postage printed from home?
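To make that concrete, here's a toy sketch in Python (every name in it is hypothetical) of where the specification problem actually lives:

Code:
# Toy illustration of an under-specified objective for a stamp-collecting agent.
# The counting is trivial; the hard part is the predicate is_real_stamp, which
# somehow has to exclude forgeries while still admitting home-printed postage.

def is_real_stamp(item) -> bool:
    # What goes here? "Printed by a postal authority" rules out home-printed
    # postage; "accepted as valid postage" invites the agent to get fakes
    # accepted; "looks like a stamp" rewards convincing forgeries.
    raise NotImplementedError("this is exactly the hard part")

def objective(collection) -> int:
    # Naive goal: maximise the number of 'real' stamps collected.
    return sum(1 for item in collection if is_real_stamp(item))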

Again, the point of all this is not to say that the problems are insurmountable, but that they are far more challenging than they initially seem. If, a month ago, I had asked you to list the most severe consequences of a stamp-collecting AI, you likely would not have said, "It's going to use all manner of fraud and deceit and hacking to achieve its goal, and it's going to do it better than any human ever could."

The concern over AI danger can be broken down into three categories:

1. Severe danger that is apparent to any human. (Giving a computer access to nuclear weapons)
2. Severe danger that seems completely harmless to almost anyone. (Stamp collecting)
3. Severe danger that seems completely harmless to the smartest person on earth, but actually contains an obscure flaw that AI could easily take advantage of. (We can't even imagine what this is)
Old 8th April 2019, 08:47 AM   #329
Belz...
Fiend God
 
 
Join Date: Oct 2005
Location: In the details
Posts: 83,714
Originally Posted by I Am The Scum
The point about fake stamps is not that an AI can't tell the difference between real and fake stamps, but rather, that the distinction between real and fake stamps would either be overlooked or ill defined in the AI's programming.
I understand that, but that still means that the AI is not doing its job well, which falls back on its creator.

Quote:
If you think that's an easy thing to fix, then give it a shot. How do you tell the AI what a fake stamp is, while still accounting for things like postage printed from home?
Well, I'm not that awesome of a programmer, but I figure an AI would learn like a human learns, only faster and better.

Quote:
Again, the point of all this is not to say that the problems are insurmountable, but that they are far more challenging than they initially seem.
Oh, no disagreement there. AI is a tough nut, cookie, wafer and cake to crack.

Quote:
The concern over AI danger can be broken down into three categories:

1. Severe danger that is apparent to any human. (Giving a computer access to nuclear weapons)
2. Severe danger that seems completely harmless to almost anyone. (Stamp collecting)
3. Severe danger that seems completely harmless to the smartest person on earth, but actually contains an obscure flaw that AI could easily take advantage of. (We can't even imagine what this is)
I do understand those concerns, but my general point in this thread has been that those concerns often stem from a misunderstanding of what AI can and would be expected to be like. At best, they are appeals to ignorance because the people making them simply don't know what an AI would be like.

Like, how do we even define 'AI'? Does it automatically imply a form of self-awareness? Or just a really complex and changing algorithm that acts in a way similar to what we call intelligence? If it's the latter, its limits depend on physical considerations as well as on how constrained the changes to the algorithm are. If it's the former instead, well, that's more unknown territory, but if we're smart enough not to give it simulated or real emotions, and the AI is still physically and logically constrained, it shouldn't be a major threat to anyone.
__________________
Master of the Shining Darkness

"My views are nonsense. So what?" - BobTheCoward


Old 8th April 2019, 08:50 AM   #330
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Posts: 36,795
It's like nuclear power, or internal combustion engines. Sooner or later we're going to come up with something super awesome and useful, but that requires a quantum leap forward in technological prowess. Nano-engineering maybe, or fusion plasma fluctuation control, or centrally managing the global economy.

Super useful stuff, but it requires an unprecedented level of cognitive oversight to manage such a complex and constantly-evolving problem space. So just like we built wondrous new machines, and invented whole new fields of technological innovation, just to have nuclear reactors and supersonic airplanes, we'll start building computers capable of realizing our grand new visions.

And then it'll be all over but the screaming.

And then the screaming will be over, too.
Old 8th April 2019, 03:04 PM   #331
caveman1917
Philosopher
 
Join Date: Feb 2015
Posts: 6,490
Originally Posted by I Am The Scum
The point about fake stamps is not that an AI can't tell the difference between real and fake stamps, but rather, that the distinction between real and fake stamps would either be overlooked or ill defined in the AI's programming.

If you think that's an easy thing to fix, then give it a shot. How do you tell the AI what a fake stamp is, while still accounting for things like postage printed from home?

Again, the point of all this is not to say that the problems are insurmountable, but that they are far more challenging than they initially seem. If, a month ago, I had asked you to list the most severe consequences of a stamp-collecting AI, you likely would not have said, "It's going to use all manner of fraud and deceit and hacking to achieve its goal, and it's going to do it better than any human ever could."

The concern over AI danger can be broken down into three categories:

1. Severe danger that is apparent to any human. (Giving a computer access to nuclear weapons)
2. Severe danger that seems completely harmless to almost anyone. (Stamp collecting)
3. Severe danger that seems completely harmless to the smartest person on earth, but actually contains an obscure flaw that AI could easily take advantage of. (We can't even imagine what this is)
If you can build an AGI, then you can certainly teach it the difference between fake and real stamps. Of all of these, that one seems to be one of the easier problems; we could probably even do that right now without needing a full AGI.
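For what it's worth, the narrow version of that task looks roughly like any other image-classification problem. A minimal sketch, assuming a hypothetical labelled folder of genuine and counterfeit stamp images ("stamps/train" with one subfolder per class):

Code:
# Minimal sketch of a narrow, non-AGI real-vs-fake stamp classifier using a
# small convolutional network. The dataset path and class labels are
# hypothetical, purely for illustration.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "stamps/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # estimated P(real)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

Whether it generalises to forgeries it has never seen is another matter, which is rather the point of the specification worry upthread.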
__________________
"Ideas are also weapons." - Subcomandante Marcos
"We must devastate the avenues where the wealthy live." - Lucy Parsons
"Let us therefore trust the eternal Spirit which destroys and annihilates only because it is the unfathomable and eternal source of all life. The passion for destruction is a creative passion, too!" - Mikhail Bakunin