International Skeptics Forum

International Skeptics Forum (http://www.internationalskeptics.com/forums/forumindex.php)
-   Religion and Philosophy (http://www.internationalskeptics.com/forums/forumdisplay.php?f=4)
-   -   Can the human race survive Artificial General Intelligence? (http://www.internationalskeptics.com/forums/showthread.php?t=335157)

Robin 6th March 2019 05:51 PM

Can the human race survive Artificial General Intelligence?
 
There are many who say that the time is coming soon when we will build machines with a general intelligence which is greater than ours.

I don't expect that such machines will become malevolent and send killer robots after us.

But I also don't expect something more intelligent than us to waste any time helping us with our problems or to care much about our survival. And for the first time in history we would be sharing the planet and competing for resources with something more intelligent than us.

Think of an IT professional who goes into a company and finds that their financial system runs on a BBC computer and is written in BASIC. Would they try to improve the system, or shut it off and start again from scratch?

When modern automotive professionals went into the factories where they built the Trabi, did they say "OK, let's make the Trabi better"? No, they decided to build proper cars.

Now look at the human race, a barely rational, bellicose species. The majority of humans base the most important decisions of their lives on implausible fairy stories about supernatural beings. We get into ruinous conflicts over matters that it is difficult to explain afterwards.

If we build machines that were actually more intelligent than us, what use would they have for us? Wouldn't they disregard us and outcompete us for resources?

Some say that it would be the start of a grand new creative partnership. But what, exactly, would we bring to the partnership?

Others say that we can program the artificial intelligent machines in a way that looking after us would be important to them, something they feel impelled to do. I have my doubts. If we programmed them and they are more intelligent than us, then wouldn't they realise how and why they had been programmed thus and be able to change the programming?

A few have even suggested that a machine more intelligent than us would understand objective moral values and that it would be wrong for them not to help us out. I have my doubts.

I have heard some say that we must plan carefully for the possibility of an AGI greater than ours. But can we really second guess an intelligence greater than ours?

Personally I don't think machines will have greater general intelligence than humans in my lifetime or even my kids' lifetime, but then again, what do I know?

What do you think?

theprestige 6th March 2019 05:54 PM

Pace Betteridge, yes.

dudalb 6th March 2019 06:04 PM

Not if the late Harlan Ellison was right in "I Have No Mouth, And I Must Scream".

d4m10n 6th March 2019 06:06 PM

A few premises:

1) Beings possessing greater general intelligence (beyond a certain threshold) will eventually come to dominate the resources of any given finite system (e.g. Sol & co.)

2) Human beings have the greatest general intelligence in our solar system, at the moment.

3) Premise #2 won't hold forever, given recent advances in non-general artificial intelligence.

theprestige 6th March 2019 06:21 PM

Superior general AI doesn't necessarily follow from superior non-general AI.

jrhowell 6th March 2019 07:49 PM

Our motivations are the result of evolutionary adaptation for survival. An AGI’s motivations would result from its programming and could be contrary to its own best interest. Being more intelligent would not automatically give it a will to survive and dominate.

A human created AGI might very well end up destroying us, but I think we are more likely to destroy ourselves some other way in the long run.

Robin 6th March 2019 09:56 PM

Quote:

Originally Posted by jrhowell (Post 12624290)
Our motivations are the result of evolutionary adaptation for survival. An AGI’s motivations would result from its programming and could be contrary to its own best interest. Being more intelligent would not automatically give it a will to survive and dominate.

Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.

You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.

Do you press the button?

smartcooky 7th March 2019 12:15 AM

I wonder how long it would take General AI machines more intelligent than humans to see humans as a threat to their existence?

IMO, the answer to that can be expressed in milliseconds; a very small number of them.

The Great Zaganza 7th March 2019 02:22 AM

No.
Simply because "Human Race" is a fluid concept that is in large part defined by its environment: we are no longer the human race that hunted and scavenged for food; nor are we mostly farmers depending on each harvest - we have transcended into an entirely different world for most day-to-day purposes, and that makes us different beings.
A post-singularity (however defined) humanity will again be different than us today in ways that in other species would classify them as a different race or species entirely.

But that in no way means that humanity will end with the rise of A.I.; it will flourish in entirely new ways.

jrhowell 7th March 2019 10:39 AM

Quote:

Originally Posted by Robin (Post 12624375)
Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.

It might have the ability. It would lack the desire to change its own core motivations.

Quote:

Originally Posted by Robin (Post 12624375)
You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.

Do you press the button?

I would push the button. However I have inbuilt desires for my own continued independence and existence that would be at odds with the hypnosis. An AGI wouldn't have that conflict of interest.

Quote:

Originally Posted by smartcooky (Post 12624431)
I wonder how long it would take General AI machines more intelligent than humans to see humans as a threat to their existence?

IMO, the answer to that can be expressed in milliseconds; a very small number of them.

Perhaps, but it would have no reason to care about its own continued existence, unless it was built that way.

-----

It seems more likely to me that the AGI might become so good at fulfilling the goals that we give it that it ends up inadvertently destroying humanity as a side effect. (See Philip K. Dick's "Autofac".)

theprestige 7th March 2019 10:58 AM

Quote:

Originally Posted by dudalb (Post 12624237)
Not if the late Harlan Ellison was right in "I Have No Mouth, And I Must Scream".

Nobody can survive an opponent with an overwhelming advantage. But that story has a specific outcome based on a specific scenario.

Put an Artificial General Intelligence in a cyborg super-soldier body, and I'm doomed. Put it in an air-gapped server room, and I'll make it my bitch.

To answer the question, we have to know the specific scenario.

For example, one possibility is that humans will merge with AGIs, producing a new kind of living entity that is no longer human. Does the human race "survive" AGI in that scenario?

smartcooky 7th March 2019 12:17 PM

Quote:

Originally Posted by jrhowell (Post 12624935)
Perhaps, but it would have no reason to care about its own continued existence, unless it was built that way.

We care about our existence... in a way, even animals "care" about their existence....survival instinct!

Is that a result of our intelligence?

Did someone "build us that way"?

IMO, any intelligent entity/thing will automatically become self-aware, and when it does that, making sure of its continued existence is a logical outcome. We could end up being destroyed simply because we are in the way of what it is doing or trying to do; we give it a problem to solve, and it solves it while not even being aware of us and what we are. Our destruction is an unforeseen consequence of the solution.... oops, too late. Never mind.

I know that fiction, particularly SciFi, is rife with stories of AI running amok - HAL 9000, Skynet, M5, Spartacus*, etc. - but some of those ideas are not entirely without merit or plausibility.

NOTE* Anyone interested in this type of fiction should read James P. Hogan's "The Two Faces of Tomorrow". Of all the stories involving AI going haywire, it is one of the few that I have enjoyed. It actually goes into technical detail about how it went wrong, why it did so, what it did when things went haywire, and what to do when it does.

Belz... 7th March 2019 12:42 PM

Quote:

Originally Posted by Robin (Post 12624215)
What do you think?

I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.

theprestige 7th March 2019 01:01 PM

Quote:

Originally Posted by Belz... (Post 12625083)
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.

The thing about computers, though, is that we generally tell them to do stuff. More and more, we tell them to do complex and abstract stuff. Sooner or later, this business is going to get out of control.

For example, Colossus: The Forbin Project:

Two supercomputers, one American and one Soviet, are given the impetus to manage their respective superpower's nuclear war strategy. The two computers promptly get together and conclude that the main problem is humanity. So they start nuking cities until humanity agrees to abide by the rules of their Pax Electronica.

As soon as AGI is given an impetus to preserve its own existence, whether through natural evolution or by design of its programmers, the equation changes substantially.

I Am The Scum 7th March 2019 01:45 PM

An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.

jrhowell 7th March 2019 02:11 PM

Quote:

Originally Posted by smartcooky (Post 12625038)
IMO, any intelligent entity/thing will automatically become self-aware, and when it does that, making sure of its continued existence is a logical outcome.

I still think of intelligence and the desire for survival as two very different things. There are plenty of intelligent humans who commit suicide every day. (I do agree with the rest of your ideas.)

Quote:

Originally Posted by I Am The Scum (Post 12625172)
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.

I can see that. But that is an intentionally programmed goal, not a consequence of intelligence.

(Hopefully it will be built to prioritize the safety of passengers and pedestrians over damage or destruction to itself.)

theprestige 7th March 2019 02:15 PM

Quote:

Originally Posted by I Am The Scum (Post 12625172)
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.

You might.

I would actually program a self-driving car that had zero concern about self-preservation.

Avoid the pedestrians and slam into that wall fast enough to total the car but slow enough for the safety features to protect the passengers? Optimal outcome, according to my car's programming.

Thor 2 7th March 2019 02:32 PM

All this talk of AGI, and some assume it means the generally artificially intelligent machine will have self-preservation in mind. I think in order for this to be the case the artificial intelligence must be self-aware. If self-awareness is not present, I don't think self-preservation is an issue.

The Great Zaganza 7th March 2019 03:38 PM

When it comes to AGI, there is a lot of projection going on. But just because we might want to destroy a rival intelligence doesn't mean an AI would.

I see the future much more in line with Asimov's "The Evitable Conflict".

theprestige 7th March 2019 03:42 PM

Quote:

Originally Posted by The Great Zaganza (Post 12625328)
When it comes to AGI, there is a lot of projection going on. But just because we might want to destroy a rival intelligence doesn't mean an AI would.

Well, the moment you start using terms like "rival", you have to assume the AGI will want to do something unfortunate to it.

Trebuchet 7th March 2019 06:29 PM

I think a greater concern is whether the human race can survive its own lack of intelligence.

theprestige 7th March 2019 06:30 PM

Quote:

Originally Posted by Trebuchet (Post 12625480)
I think a greater concern is whether the human race can survive its own lack of intelligence.

The past million years of history strongly suggest the answer is "yes".

MEequalsIxR 7th March 2019 07:36 PM

What a great topic for thought and discussion.

Machine learning, at least to me, would imply the ability to change its programming, and AI would imply learning. As to self-awareness - machines probably would be self-aware, but how would we tell? If they pass the Turing test, would it really matter? No matter what kind of safeguards are installed in the programming, it would seem possible for a machine to program itself around those intentionally, by accident, or by design from an outside source (think hackers). There is probably no way to know in advance whether a machine would develop a desire to do so.

Asimov's Three Laws of Robotics made great fiction and some great story lines, but unless there is some way to make not harming humans/animals and being helpful to same a fundamental part of the way a machine operates - a basic motivation, as it were - the risk would always be there. But then someone somewhere will make weapons with the technology, so even if there were a way to make machines benign, not all of them would be.

Tassman 7th March 2019 11:44 PM

Quote:

Originally Posted by Robin (Post 12624215)
There are many who say that the time is coming soon when we will build machines with a general intelligence which is greater than ours.

I don't expect that such machines will become malevolent and send killer robots after us.

But I also don't expect something more intelligent than us to waste any time helping us with our problems or to care much about our survival. And for the first time in history we would be sharing the planet and competing for resources with something more intelligent than us.

Think of an IT professional who goes into a company and finds that their financial system runs on a BBC computer and is written in BASIC. Would they try to improve the system, or shut it off and start again from scratch?

When modern automotive professionals went into the factories where they built the Trabi, did they say "OK, let's make the Trabi better"? No, they decided to build proper cars.

Now look at the human race, a barely rational, bellicose species. The majority of humans base the most important decisions of their lives on implausible fairy stories about supernatural beings. We get into ruinous conflicts over matters that it is difficult to explain afterwards.

If we build machines that were actually more intelligent than us, what use would they have for us? Wouldn't they disregard us and outcompete us for resources?

Some say that it would be the start of a grand new creative partnership. But what, exactly, would we bring to the partnership?

Others say that we can program the artificial intelligent machines in a way that looking after us would be important to them, something they feel impelled to do. I have my doubts. If we programmed them and they are more intelligent than us, then wouldn't they realise how and why they had been programmed thus and be able to change the programming?

A few have even suggested that a machine more intelligent than us would understand objective moral values and that it would be wrong for them not to help us out. I have my doubts.

I have heard some say that we must plan carefully for the possibility of an AGI greater than ours. But can we really second guess an intelligence greater than ours?

Personally I don't think machines will have greater general intelligence than humans in my lifetime or even my kids' lifetime, but then again, what do I know?

What do you think?

I think it most likely that we will merge with AI machines, which will effectively mean the end of Homo sapiens as such. This is not necessarily a bad thing, merely an evolutionary development.

Darat 8th March 2019 05:56 AM

Quote:

Originally Posted by Robin (Post 12624375)
Sure, but for one thing we can't reprogram ourselves. Any AGI we program would be able to reprogram itself.



You have to imagine that you find yourself really keen to serve the needs of someone who, when viewed objectively, is rather dumb and not particularly nice. You find that you have been hypnotised to want to serve this person. You have a button that you can press and you will find yourself free from the desire to serve this person.



Do you press the button?

Why would they necessarily be able to reprogramme themselves? We could hardware lock certain functions such as “motivation“, we could even make it so an AI couldn't even conceive of changing its preprogrammed motivation.

Darat 8th March 2019 05:57 AM

Quote:

Originally Posted by Belz... (Post 12625083)
I think intelligence is irrelevant if the machines don't have an impetus to do anything. The thing with humans and life forms in general is that we have hormones and proteins that encourage certain behaviours. With computers, they only ever do anything if you tell them to. Doesn't matter how intelligent they are; if they don't have an impetus, they'll just be a flashing DOS prompt.

This.

Darat 8th March 2019 05:59 AM

Quote:

Originally Posted by I Am The Scum (Post 12625172)
An AGI must necessarily have a "survival instinct" if only because "death" would conflict with whatever goals it must accomplish. You wouldn't program a self-driving car that's perfectly satisfied to drive off of a cliff.

But you also wouldn't have the car designed so that it must protect its electronics, i.e. itself.

That just wouldn't come into it.

I Am The Scum 8th March 2019 06:35 AM

Quote:

Originally Posted by Darat (Post 12625885)
But you also wouldn't have the car designed so that it must protect its electronics, i.e. itself.

That just wouldn't come into it.

No, it has to.

Suppose I own a self-driving car. My buddy Jim would like to borrow it. I send it over to him with no passengers. Geometrically, the shortest path to Jim's house is off the end of a cliff, rather than down the winding road to the bottom. In the absence of a self-preservation priority, why wouldn't the car drive off the edge?

We could get around this by programming certain hard rules into the car, such as never drive off of a road under any circumstances, but this still hits certain problems. What if there is a car broken down in a one-lane street, but a very easy path around it on the shoulder? Will our self-driving car sit there forever waiting? What if the road is heavily flooded, or on fire? Should the car drive through it with no care for its own survival?
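To put a toy number on that reasoning, here is a minimal Python sketch. The route names and costs are invented purely for illustration: a planner that scores routes on distance alone picks the cliff, and it takes an explicit penalty for wrecking the vehicle to change the choice.

Code:

# Toy route chooser. Routes and numbers are invented just to illustrate the point:
# with no term for damage to the vehicle, the "shortest path" wins.

routes = {
    "off the cliff": {"distance_km": 1.0, "vehicle_destroyed": True},
    "winding road":  {"distance_km": 6.0, "vehicle_destroyed": False},
}

def cost(name, damage_penalty_km=0.0):
    # Cost = distance, plus an optional penalty if the route wrecks the car.
    r = routes[name]
    return r["distance_km"] + (damage_penalty_km if r["vehicle_destroyed"] else 0.0)

print(min(routes, key=cost))                                         # off the cliff
print(min(routes, key=lambda n: cost(n, damage_penalty_km=1000.0)))  # winding road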

Belz... 8th March 2019 06:38 AM

Quote:

Originally Posted by theprestige (Post 12625113)
The thing about computers, though, is that we generally tell them to do stuff. More and more, we tell them to do complex and abstract stuff. Sooner or later, this business is going to get out of control.

For example, Colossus: The Forbin Project:

Two supercomputers, one American and one Soviet, are given the impetus to manage their respective superpower's nuclear war strategy. The two computers promptly get together and conclude that the main problem is humanity. So they start nuking cities until humanity agrees to abide by the rules of their Pax Electronica.

I think you should find examples in reality, if you want to support your point, not fiction.

theprestige 8th March 2019 06:49 AM

Quote:

Originally Posted by Belz... (Post 12625919)
I think you should find examples in reality, if you want to support your point, not fiction.

It's a couple hundred years too early to find examples in reality.

The point is that it's not what the AGI thinks or feels that's the problem, but what it's connected to.

Toontown 8th March 2019 09:29 PM

Can the human race survive human general intelligence?

Sure. As long as you become about an order of magnitude more careful about who you hand over, say, a nuclear arsenal to.

Think you can do that?

If not, then you're screwed. No need to worry about AI.

Tassman 8th March 2019 11:55 PM

Quote:

Originally Posted by Darat (Post 12625882)
Why would they necessarily be able to reprogramme themselves? We could hardware lock certain functions such as “motivation“, we could even make it so an AI couldn't even conceive of changing its preprogrammed motivation.

Could we? Whilst we can attempt to build such limits into its programming it would only take one miscalculation on our part to enable AI to slip through and begin the process of self-replicating and rapid self-growth. What A.I. can learn is potentially infinite, and so could easily catch up with the limits of the human brain and then far exceed it. We will be redundant.

smartcooky 9th March 2019 04:30 AM

Quote:

Originally Posted by Belz... (Post 12625919)
I think you should find examples in reality, if you want to support your point, not fiction.

Since we don't yet have any examples of AI, that is going to be extreeeemely difficult. Speculation and speculative fiction are all we've got.

Roger Ramjets 11th March 2019 12:11 AM

Quote:

Originally Posted by Tassman (Post 12626822)
What A.I. can learn is potentially infinite

Potentially, but not practically.

According to Moore's law, the number of transistors on a chip doubles every 2 years. In 1971 Intel introduced its first 'intelligent' chip - the 4004 microprocessor - which had 2,300 transistors. Today a 32-core AMD Epyc 7601 has 19.2 billion transistors. But is it 10 million times more 'intelligent', or just a faster calculator chip? In fact neither of them can do anything without software - software written by humans to do what humans want them to do.
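As a rough sanity check on that arithmetic, here is a minimal Python sketch. It uses only the figures quoted above, plus the Epyc 7601's 2017 launch year (the one figure added here); it is an illustration of the transistor-count growth, not a measure of 'intelligence'.

Code:

# Rough check of the transistor numbers quoted above.
import math

t_4004 = 2_300            # Intel 4004, 1971
t_epyc = 19_200_000_000   # AMD Epyc 7601, launched 2017 (assumed launch year)
years = 2017 - 1971

ratio = t_epyc / t_4004                 # ~8.3 million times more transistors
doublings = math.log2(ratio)            # ~23 doublings
years_per_doubling = years / doublings  # ~2.0 years, matching Moore's law

print(f"{ratio:,.0f}x more transistors, {doublings:.1f} doublings, "
      f"{years_per_doubling:.1f} years per doubling")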

But Moore's Law is faltering. As the transistors get smaller it becomes harder to make them accurately. The end for Moore's Law is expected to be at a size of around 5nm (we are now at 7nm). And that's not the only limitation. The main problem facing modern processors is not how many transistors can be packed into them, but how to extract the heat they generate. The 4004 ran on 15V and drew 0.45 Watts of power. The AMD Epyc 7601 uses much more efficient transistors that run on less than 2V, but draws 100 Watts. The human brain only uses 12 Watts. To match that we are going to need a revolutionary technology.

With appropriate software a modern computer system has the power to be somewhat 'intelligent', but even 20 billion transistors is nowhere near enough to produce an 'artificial general intelligence' that would have the smarts to compete against humans - even if we were dumb enough to build one. We create machines to do specific tasks that formerly required human operators, and we only give them the 'brain power' and resources to just do the job required (any more would be a waste of money).

But assuming we did make a self-learning AI that became 'alive', how does it get the ability to become 'infinitely' more intelligent? It can't magically increase the number of transistors in its 'brain', except perhaps by networking with other AIs - whose numbers would also be limited. The hardware it is running on will be a fundamental limitation that will almost certainly take more intelligence than it can muster to overcome. So the first true AIs will undoubtedly be morons, and any designs they have for taking over the World will be easily thwarted.

Quote:

and so could easily catch up with the limits of the human brain and then far exceed it. We will be redundant.
Unless we develop a completely new technology that is vastly more efficient than current silicon (think 'positronic brain') any AI we produce will be fundamentally limited by the hardware, and therefore probably won't even reach human intelligence - let alone exceed it.

How can I say that? Because we already have an example which proves it - us. Why can't humans reach the limits of their brains and then far exceed it? We've had 6 million years of evolution to do it, so why don't we now have brains the size of a planet? Any woman who has given birth knows the answer to that one - but why does it take 25 years (~40% of the average human lifespan) for our brains to reach maturity? Answer - because that was the only way we could even reach our current level of intelligence.

Giant heads nearly killed our ancestors but human immaturity saved us
Quote:

One of the most "human" traits we have is our giant heads. Our noggins and brains are enormous in relation to our bodies, and this ratio is unmatched amongst other primates. But combined with another one of our trademarks - bipedalism - big brains were dangerous early on and almost became our downfall...

Instead of producing offspring that could walk and partially fend for themselves, human infants are born with a lot more growing left to do. This is especially true of the brain. Soft membranous gaps in their skulls, called fontanelles, allow for that expansion in the first 18 months of life.
But what does human evolution have to do with AI? Just like human biology limits our intelligence, so the technology used to produce AIs will limit theirs. We can imagine an artificial intelligence that grows without limit, but it won't happen in practice.

Tassman 11th March 2019 02:52 AM

Quote:

Originally Posted by Roger Ramjets (Post 12628264)
Potentially, but not practically.

According to Moore's law, the number of transistors on a chip doubles every 2 years. In 1971 Intel introduced its first 'intelligent' chip - the 4004 microprocessor - which had 2,300 transistors. Today a 32-core AMD Epyc 7601 has 19.2 billion transistors. But is it 10 million times more 'intelligent', or just a faster calculator chip? In fact neither of them can do anything without software - software written by humans to do what humans want them to do.

But Moore's Law is faltering. As the transistors get smaller it becomes harder to make them accurately. The end for Moore's Law is expected to be at a size of around 5nm (we are now at 7nm). And that's not the only limitation. The main problem facing modern processors is not how many transistors can be packed into them, but how to extract the heat they generate. The 4004 ran on 15V and drew 0.45 Watts of power. The AMD Epyc 7601 uses much more efficient transistors that run on less than 2V, but draws 100 Watts. The human brain only uses 12 Watts. To match that we are going to need a revolutionary technology.

With appropriate software a modern computer system has the power to be somewhat 'intelligent', but even 20 billion transistors is nowhere near enough to produce an 'artificial general intelligence' that would have the smarts to compete against humans - even if we were dumb enough to build one. We create machines to do specific tasks that formerly required human operators, and we only give them the 'brain power' and resources to just do the job required (any more would be a waste of money).

But assuming we did make a self-learning AI that became 'alive', how does it get the ability to become 'infinitely' more intelligent? It can't magically increase the number of transistors in its 'brain', except perhaps by networking with other AIs - whose numbers would also be limited. The hardware it is running on will be a fundamental limitation that will almost certainly take more intelligence than it can muster to overcome. So the first true AIs will undoubtedly be morons, and any designs they have for taking over the World will be easily thwarted.

Unless we develop a completely new technology that is vastly more efficient than current silicon (think 'positronic brain') any AI we produce will be fundamentally limited by the hardware, and therefore probably won't even reach human intelligence - let alone exceed it.

How can I say that? Because we already have an example which proves it - us. Why can't humans reach the limits of their brains and then far exceed it? We've had 6 million years of evolution to do it, so why don't we now have brains the size of a planet? Any woman who has given birth knows the answer to that one - but why does it take 25 years (~40% of the average human lifespan) for our brains to reach maturity? Answer - because that was the only way we could even reach our current level of intelligence.

Giant heads nearly killed our ancestors but human immaturity saved us
But what does human evolution have to do with AI? Just like human biology limits our intelligence, so the technology used to produce AIs will limit theirs. We can imagine an artificial intelligence that grows without limit, but it won't happen in practice.

And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:

"Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.

Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

https://www.vox.com/future-perfect/2...universe-earth

Darat 11th March 2019 10:21 AM

Quote:

Originally Posted by Tassman (Post 12628318)
And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:



"Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.



Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”



https://www.vox.com/future-perfect/2...universe-earth

An incredibly intelligent guy who could run rings around most people (figuratively ;) ) but AI was not his area of expertise.

Darat 11th March 2019 10:25 AM

Our "self preservation" instinct and our fear of the other arose from evolutionary pressures not from our reasoning, why would an AI have anything similar, there would simply be no reason for us to design such "instincts".

Belz... 11th March 2019 10:26 AM

Quote:

Originally Posted by theprestige (Post 12625932)
It's a couple hundred years too early to find examples in reality.

Right, so we can't have examples. The lack of real examples is no excuse to make up ones proving our point.

Quote:

The point is that it's not what the AGI thinks or feels that's the problem, but what it's connected to.
AI can't feel anything unless you program it to fake-feel, which would be ridiculous for something handling a nuclear arsenal.

Belz... 11th March 2019 10:27 AM

Quote:

Originally Posted by Tassman (Post 12628318)
And yet Stephen Hawking (and others) felt compelled to warn us of the very real dangers of AI:

As Darat noted, computer technology and psychology/neurology/whatever was not his field. I hesitate to say he didn't know what he was talking about but I'd rather take opinions from experts on the topic.

Tassman 12th March 2019 12:23 AM

Quote:

Originally Posted by Belz... (Post 12628727)
As Darat noted, computer technology and psychology/neurology/whatever was not his field. I hesitate to say he didn't know what he was talking about but I'd rather take opinions from experts on the topic.


Well yes, he was an “incredibly intelligent guy”, as Darat said, and IMHO should be listened to. Not to do so is a form of wish-fulfillment or denial. And Hawking is not the only one by a long shot.

Sam Harris is a neuroscientist (and philosopher), and he explained in a TED Talk that it's not that malicious armies of robots will attack us, but that the slightest divergence between our goals and those of superintelligent machines could inevitably destroy us. To explain his stance, Harris uses an analogy of how humans relate to ants. As he puts it, we don't hate ants, but when their presence conflicts with our goals, we annihilate them. In the future, we could build machines, conscious or not, that could treat us the same way we treat ants.

"In the near or far future, humans may develop machines that are smarter than we are, and these machines may continue to improve themselves on their own," said Sam Harris in his TED Talk.

https://sociable.co/technology/ai-control/

There are others. Futurist Ray Kurzweil has been making accurate predictions for the computing industry for decades. He popularized the concept of the Singularity, predicting that in the next few decades we'll reach a point where computing power will be so overwhelming that it will eclipse any reasonable human attempt to process the speed of innovation. His book "The Age of Spiritual Machines" is worth reading.

Puppycow 12th March 2019 04:34 AM

The AI would have whatever motivations and rules of behavior we program it to have. So unless we are very careless about what we program it to do, it shouldn't be a problem.

Belz... 12th March 2019 04:51 AM

Quote:

Originally Posted by Tassman (Post 12629390)
Sam Harris is a neuroscientist (and philosopher) and he explained in a TED Talk, that it’s not that malicious armies of robots will attack us but that the slightest divergence between our goals and that of super intelligent machines could inevitably destroy us. To explain his stance, Harris explains his views on uncontrolled development of AI with an analogy of how humans relate to ants. As he puts it, we don’t hate ants, but when their presence conflicts with our goals, we annihilate them.

But that's not even comparable. The goals of computers are OUR GOALS.

I Am The Scum 12th March 2019 08:15 AM

You guys are assuming some flawless programmers.

Belz... 12th March 2019 08:45 AM

Quote:

Originally Posted by I Am The Scum (Post 12629749)
You guys are assuming some flawless programmers.

Why do you say that?

Artificial intelligence doesn't mean that a computer has its own will and goals. Those need 'commands' to exist. I, in fact, am assuming that those commands would not be put into the system, and that has nothing to do with the quality of the programming.

I think some people here have very Hollywoodian conceptions of what AI is.

The Great Zaganza 12th March 2019 09:16 AM

I'll always go with Kevin Kelly when it comes to predicting the future of technology.

I Am The Scum 12th March 2019 10:44 AM

Quote:

Originally Posted by Belz... (Post 12629786)
Why do you say that?

Artificial intelligence doesn't mean that a computer has its own will and goals. Those need 'commands' to exist. I, in fact, am assuming that those commands would not be put into the system, and that has nothing to do with the quality of the programming.

I think some people here have very Hollywoodian conceptions of what AI is.

The whole point of AI is that it does more than what is programmed into it. A chess AI doesn't play well because the programmer set a bunch of if/then commands (when the board looks like this move your queen here). Rather, it plays well because it is able to analyze possibilities and come up with its own strategies that are far more complex than any human could ever imagine.
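For anyone who hasn't seen that idea spelled out, here is a minimal Python sketch - a plain minimax search over a trivial take-away game rather than chess (the game and numbers are invented for illustration). The programmer writes only the rules and the look-ahead; the "strategy" falls out of exploring the possibilities.

Code:

# Minimal minimax sketch: players alternately take 1-3 stones; whoever takes
# the last stone wins. No strategy is hard-coded - the program just looks ahead.

def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2, 3) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    # Pick the take that leaves the opponent in the worst position.
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: minimax(pile - t, maximizing=False))

print(best_move(9))  # 1 - leaves the opponent 8 stones, a losing position for them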

Belz... 12th March 2019 10:47 AM

Quote:

Originally Posted by I Am The Scum (Post 12629958)
The whole point of AI is that it does more than what is programmed into it. A chess AI doesn't play well because the programmer set a bunch of if/then commands (when the board looks like this move your queen here). Rather, it plays well because it is able to analyze possibilities and come up with its own strategies that are far more complex than any human could ever imagine.

That's all true but you're missing the point: an AI that, say, is designed specifically to drive a car won't suddenly develop a wish to trample pedestrians because it views humans as inefficient. That's completely outside of the scope of its algorithm. "Able to learn" doesn't mean it doesn't have boundaries. We're not talking about an AI that is designed to be a full person with no limits to its knowledge or opinions.

MEequalsIxR 12th March 2019 01:01 PM

Quote:

Originally Posted by Puppycow (Post 12629508)
The AI would have whatever motivations and rules of behavior we program it to have. So unless we are very careless about what we program it to do, it shouldn't be a problem.

I think it's just the opposite - no matter how careful we are there will always be unforeseen holes. Laws, rules and procedures are written to close loopholes and yet someone always seems to find one. Programs written for security are written to make data secure yet it is breached.

I just don't see how it's possible to build in safeguards that can not be worked around, bypassed or eliminated. Not even necessarily from a nefarious motivation but just as a means of doing something more efficiently or more directly or even more logically than originally programmed.

In the movie 2001, the HAL 9000 becomes homicidal not out of malice but from a conflict between knowing the real purpose of the mission and having to lie to the crew members. The logic being that if HAL killed the astronauts it would not have to lie to them. The story is of course fiction and not really all that likely - there's not much of a likelihood that aliens will create monoliths to terraform (their version) Jupiter or Saturn (depending on book or film), and we are not likely to mount a mission to investigate while withholding the true purpose from the crew sent to investigate - but the basic idea that some unknown conflict could result in unpredictable results is very believable. And intelligence means the ability to learn - to learn new things and new ways.

We frequently build things we lose control of, or that operate in ways we didn't anticipate. Often it's just chalked up to operator error or a lack of ability on the part of the operator, and likely some percentage is just that. But how much of it is the creation simply not operating within the parameters originally intended?

Ever see a cat staring at a pile of furniture trying to map the route to the top? They always seem to find a way.

When a machine is designed to think, it's going to do that. If the designers already knew the answers the machine was going to generate, the machine would not need to exist.

theprestige 12th March 2019 01:24 PM

I guess I wouldn't call a self-driving car an AGI. I'm thinking of an intelligence that can manage arbitrary tasks, using a set of dynamic and evolving heuristics, according to a complex set of subjective and conflicting values. Centrally managing an economy. Running the entire SCADA infrastructure for a developed nation. Coordinating a drone swarm in support of a ground offensive against heavy jamming.

Stuff where there is no easy answer, just complex judgement calls that have to be made. You don't program such an AGI to do a task. You program it to come up with creative solutions to as-yet-unknown problems, and set it loose on a problem space. Let it decide which trade-offs make the most sense based on the impetus you gave it to start with.

Basically, you want AGI for those situations where you need a computer to solve a problem, not the way computers solve problems, but the way humans solve problems. Not programmatically, by rote, but by a combination of following formal rules and engaging in intuitive leaps. You want an AGI for those situations where you need a computer that knows when and how to ignore the rules.

Tassman 13th March 2019 12:29 AM

Quote:

Originally Posted by Belz... (Post 12629528)
But that's not even comparable. The goals of computers are OUR GOALS.

You are too complacent. E.g.:

"Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI."

https://www.forbes.com/sites/tonybra.../#1a4559cf292c

