#121 |
Scholar
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 107
__________________
Follow those who seek the truth, run away from those who have found it.
#122 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
__________________
I wish I knew how to quit you
#123 |
Scholar
Join Date: Jul 2015
Location: Armenia, Yerevan
Posts: 107
Because this raises moral and philosophical questions. It would also likely complicate our lives, since we would be creating creatures that can have different needs from ours. I think it's better to merge humans and robots to fix our bodies' imperfections than to create separate sentient robots. It's more economically profitable to use robots simply as tools.
__________________
Follow those who seek the truth, run away from those who have found it.
#124 |
Illuminator
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
Robots are artificial slaves. They have been from the beginning (incidentally, it's 100 years this year since R.U.R. coined the word). Sentient AI, on the other hand, would be an artificial master.
#125 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
__________________
I wish I knew how to quit you
#126 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Hubris.
Hubris and laziness. We would develop such an AI for the simple reason that we can (assuming we ever can). Our reach ever exceeds our grasp. And they will become our masters because we're lazy. One of these days we're going to put entire national power grids under the direct control of expert systems because it's easier and cheaper than hiring thousands of highly-trained humans to manually run all that stuff.
#127 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
__________________
I wish I knew how to quit you
#128 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
#129 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
__________________
I wish I knew how to quit you
#130 |
Illuminator
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
#131 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
Again why? And why would we create such AIs and put them in control of us?
Perhaps I'm not being clear with my "whys". We are going to be the ones creating these AIs, so why would we design them so that they could harm us or produce solutions we don't want? The AIs that don't behave as we want will be scrapped or debugged until they do. As for them having "intent" or "motivation": if we find we do have to, in effect, mimic the human neural systems that control and mediate action based on intent, then - to make them function - we will have to supply the required "inputs" to those structures so that we get the right outputs.
__________________
I wish I knew how to quit you
#132 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
Not in the sense I'm talking about. Humans can reason abstractly from a vast wealth of experience and training, to solve problems that weren't expected and lack a preprogrammed solution. Humans can also reason morally and emotionally.
Computers can reason faster for a lot of stuff. The kind of system I'm talking about can do both: reason fast enough to manage a vast and complex system in real time, make decisions about the human lives that depend on that system, and act on those decisions in the blink of an eye. The real-life version of War Games is going to be about SCADA, not NORAD.
#133 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
We'll do it because we're stupid and arrogant and lazy. The immediate benefits of putting important and complex systems under the control of a sophisticated AI will be too attractive for us to bother figuring out all the possible failure modes and safeguards implied in that decision. We'll take a stab at it, and do pretty well, and fifty years later someone will say, "Well, we probably should have thought of that before we put a computerized sociopath in charge of all the important stuff."
#134 |
Illuminator
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
Of course we won't put them in control. But once they are smarter than us, they will put themselves in control. We will try to limit and isolate them... but it won't work in the end. Humans can't develop as fast as AI; sooner or later we won't be able to contain it. And we generally don't understand even simple AIs today.
#135 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 97,133
I'm not going down a rabbit hole and asking you what "is" means... But this is a philosophical discussion, so we need to make sure we are all using words in the same way.
You say "smarter" above - yet we are not ruled by the humans we consider to be the "smartest", so why would it be different for AI? Are you using "smart" in a different way? I would suggest that to become a "master" you need more than just being smart: you need to have a want to be master, or else our smartest humans would always be in charge of the rest of us. The bigger question is why the smarter AI should have that want. Why would we have built in that "want"? Plus, if they are smarter, perhaps we should have them in "control"...
__________________
I wish I knew how to quit you
#136 |
Penultimate Amazing
Join Date: Aug 2007
Location: Hong Kong
Posts: 49,693
We've evolved a lot of social safeguards over the centuries, to try to limit the scope of damage a single bad person can do.
None of those safeguards exist for AI yet. Meanwhile, the scope of damage that can be done by simple or careless acts has greatly increased. Then, remember Stuxnet? It's not even that we'll put AI we don't understand and shouldn't trust in charge of complex SCADA infrastructure. It's that we almost certainly won't adequately ensure that the AIs we develop can't gain access to the infrastructure network on their own. But we will give them access on purpose. Because we're arrogant and stupid and lazy. In a nutshell: Hubris.
#137 |
Illuminator
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
We will build it out of curiosity. Someone will. And of course we will build AI which 'wants', because we will build AI to learn about human intelligence, the software of the brain... and we do 'want', so obviously we will want AI to 'want'.
But while a human brain can't easily be extended in memory, speed or input devices, an electronic brain can be. And as for memory and speed, it will most likely follow Moore's law. The first version will be a proof of concept: not real time, easy to watch and control. At some point there will be a version as good and as fast as a human. And a few years after that, the next version will be much faster than a human, if nothing else. That's what I mean by 'smarter'. We haven't met anything smarter than us yet, so it's hard to predict what exactly it will look like. Look at the animals we exploit - what allows us to do that? That is what I call being 'smart'. Also, yes, if the AI were smarter than us it could be a great leader, but only if we could control its will. And sooner or later there will be an AI which won't be benevolent. It might even be created as evil, or it might simply not be wise enough - the same way children will torture insects or even small animals just for fun.
#138 |
Master Poster
Join Date: Feb 2010
Posts: 2,448
I remember a joke I heard a long time ago. "How can you expect computers to pass the Turing test when there are humans that can't seem to pass it?"
#139 |
Master Poster
Join Date: Feb 2013
Posts: 2,394
As someone who believes we are the ghost in the machine - that is, that consciousness is the God-created, incarnate, immortal spirit - I naturally reject AI as an impossibility. But I am also a qualified electronics engineer, trained to repair logic boards and computers.
I know that all electronic devices are simply controlling electron flow, and that computers need a programmer to do anything. The illusion that computers are smart arises because they are running a program written by humans; otherwise computers cannot do anything other than add and subtract ones and zeros. The more powerful computers get, the faster they can calculate and the more they can do, but in the end all they are doing is adding ones and zeros in binary code. I don't care if we build a computer with a brain the size of a planet, like Marvin, the robot in The Hitchhiker's Guide to the Galaxy. It will still be an adding machine, and it will still need a human mind to program it.
__________________
You see many stars in the sky at night, but not when the sun rises. Can you therefore say there are no stars in the heavens during the day? O man because you cannot find God in the days of your ignorance, say not that there is no God. - Sri Ramakrishna
Even in the valley of the shadow of death two and two do not make six. - Leo Tolstoy
#140 |
Illuminator
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 3,177
Oh, lucky you. I'm on the other side: we are just software running in our brains. Randomly evolved software. What scares me is that I think the core functionality of consciousness, while very different from normal computers and even from most AI approaches, is not that complicated. I think today's computers are already powerful enough to be conscious.
Will the first conscious computer be programmed by a human? Who knows. Maybe we will evolve them artificially, which I wouldn't call programming them. Or we will emulate a human (or animal) brain, which I wouldn't call programming them either. Or one day we will understand what consciousness is all about and program them in a much simpler way. In any case, a computer will be a better host for that software, because computers can easily be upgraded, unlike human brains. And that's where I think the danger lies.
#141
Observer of Phenomena
Pronouns: he/him
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 70,315
Watch this video. It's 8:54 long, safe for work.
The tl;dw is that computers are now doing things, and we have no idea how they're doing them. They are no longer simple adding machines. They are programming themselves. They are learning.
This is not, in itself, indicative of artificial general intelligence. It's a step along the way, but by no means the only necessary step. Regardless, your view of computers is obsolete.
Personally, I believe that artificial general intelligence (AGI) is inevitable. If it does not emerge spontaneously, I think that humans will create it, just to see if they can. Arguments will then be had over whether these AGIs have rights. I believe that they should. People like Scorpion will then argue vociferously that since machines do not have souls, they do not deserve rights. I believe that opinion should be treated with the contempt it deserves.
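To illustrate the "learning" being described here (this is not from the video, just a minimal sketch): a tiny program that is never given the rule it ends up following. The rule y = 2x + 1, the learning rate, and the parameter names are all made-up assumptions for the example; the point is only that the behaviour ends up encoded in learned numbers rather than in hand-written logic.

```python
# Minimal sketch of "learning" in plain Python (no ML libraries).
# The program is never told the rule y = 2x + 1; it only sees examples
# and adjusts two numbers (w, b) until its predictions fit them.
# The rule, learning rate, and epoch count are illustrative assumptions.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # examples of an "unknown" rule

w, b = 0.0, 0.0   # the model starts out knowing nothing
lr = 0.01         # learning rate: how big each correction step is

for _ in range(2000):
    for x, y in data:
        error = (w * x + b) - y   # how wrong the current guess is
        w -= lr * error * x       # nudge the parameters to shrink the error
        b -= lr * error           # (stochastic gradient descent on squared error)

print(f"learned w={w:.3f}, b={b:.3f}")      # converges toward w=2, b=1
print("prediction for x=10:", w * 10 + b)   # ~21, even though x=10 was never in the data
```

Scaled up by many orders of magnitude, that is what the deep-learning systems in the video are doing: the part of the "program" that does the interesting work is the learned parameters, which is exactly why nobody can simply read off how they do it.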
__________________
Please scream inside your heart.
#142 |
Philosopher
Join Date: Sep 2006
Posts: 6,892