#43
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
1. Making it not just have human-brain-level power but be, say, an order of magnitude faster at it may still be some time away.

2. The point isn't just how fast your brain or the AI works. The point is that you have to experience a LOT of reality for that model to click into place. Like, you actually need to see people and things come and go for about two years before the brain figures out that they still exist when you don't see them. (Like, that mom doesn't actually cease to exist when she covers her face with her hands while playing peek-a-boo.) You need to actually talk to people a lot, for several years, to figure out that they don't know the same things you do and don't see the world from your position. Etc.

That's the problem I see with basically just emulating a human brain and letting it learn: all those successively better world models come from actually getting that kind of experience. You actually need that kind of experience.

Now I suppose you could put it in a faster simulation of RL, but if you don't already have a human-like AI in the first place, those simulated people might not say the same things as real ones. So you might end up with a model that just fits the simulation, not the real world. Like, if I were an AI learning from Skyrim NPCs, my model might end up being that everyone DOES know every relevant thing I've done even when they weren't there to see it, which is the polar opposite of what one of the Piaget stages is about. Or I might learn that people only react to what I said directly to them, even if the other guy I told a different lie to is like 1m away.
#44
Penultimate Amazing
Join Date: Jun 2003
Posts: 51,332
Sure. But the human brain has limited data input bandwidth as well as limited processing power. A machine can be fed information much faster. The amount of data may be equivalent to, say, years of HD video input, but that doesn’t mean it takes years for your computer to load and process it.
The other thing you can do with machines but not people is parallelize: an AI can be interacting with, say, 100 people at a time and learning from all of them. The limits on how fast we can do things will not be the limits on how fast an AI can do things.

Let me reiterate that I don't think we will ever make true strong AI. I think it's just too complicated for us to figure out. But the obstacle isn't insufficient time.
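Just to make the parallelism point concrete, here's a toy sketch in Python. Everything in it (the Learner class, conversation_stream) is invented for the example; it claims nothing about how a real AI would work, only that one shared learner can ingest a hundred simulated conversations at once, which no single human could:

Code:
import queue
import threading

class Learner:
    """Stands in for the AI; here it just counts utterances per speaker."""
    def __init__(self):
        self.seen = {}

    def observe(self, speaker, utterance):
        self.seen[speaker] = self.seen.get(speaker, 0) + 1

def conversation_stream(speaker, n_utterances, out):
    # One simulated person talking; 100 of these run concurrently.
    for i in range(n_utterances):
        out.put((speaker, f"utterance {i}"))

out = queue.Queue()
threads = [threading.Thread(target=conversation_stream,
                            args=(f"person-{k}", 50, out))
           for k in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

learner = Learner()
while not out.empty():
    learner.observe(*out.get())
print(len(learner.seen), "people interacted with in parallel")  # -> 100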
#45
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
Well, it kinda is, at the moment. It may not be in the future, but it is now.
#47
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
Depends on how exactly the enlightened state is realized.

"In a world where Natural Intelligence exists, what will stop us from doing this once with one person and then copying the results into all subsequent persons, to skip the training period?" Turns out that copying the exact state of the electrochemical soup that represents the enlightened state is a problem that stops us cold. And that's before we even get to the problem of how to induce that state in another brain without killing its owner.
#48
Philosopher
Join Date: Mar 2012
Location: Rochester, NY
Posts: 6,907
In the webcomic I mentioned, despite AIs being recognized as citizens, nobody knows how AIs are actually created. The scientist who is considered "the father of AI" doesn't know. The AIs themselves don't know. It's known that if you perform actions "A", "B", and "C" under conditions "X", "Y", and "Z", a sapient consciousness emerges, but nobody can figure out why.
#51
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
Someone would have to be pretty bloody stupid to make a computer that fundamentally doesn't allow backups. But then I suppose the idiocracy is bound to happen sooner or later.
#52
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
We're not talking about a computer. We're talking about an intelligence. For all we know, it may be impossible to implement intelligence in such a way that the intelligent state can be copied as such. In human brains, the memory store and the instruction processor are tightly coupled in a feedback loop that cannot be broken without destroying the intelligence. It might turn out that's the only way to do it.
#53
Penultimate Amazing
Join Date: Nov 2006
Posts: 13,487
Yeah. The human brain doesn't allow for backups. If we produce AIs using something similar, there may not be a backup/copy mechanism.
#54
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
The human brain is basically a self-rewiring FPGA. Or rather, an imperial butt-load of FPGAs (the neural columns) around a massive-bandwidth hub.

Yes, biology never needed to evolve a way to back up that FPGA, but it turns out we know how to back up and restore an FPGA we make ourselves. As in literally, we can back up and restore the "wiring" between those gates. And there's no reason we wouldn't come up with a way to do it if it's some different variation on that theme, even if it might involve a bit more circuitry, and even if nature never needed such.

So I repeat myself: someone would have to be bloody stupid to come up with one that fundamentally can't be backed up. And it's not just that one engineer has to wake up with an idea like, "hey, let's ditch backups and lose years of work if lightning strikes"; the whole chain of command above him has to be OK with that idea. It COULD happen, but as I was saying, at that point you know the idiocracy is here.
#55
Penultimate Amazing
Join Date: Nov 2006
Posts: 13,487
How do you know the brain is an FPGA (or that the FPGA part is what suffices to make it intelligent)? How do you know we can make an FPGA we build intelligent? How do you know that the first artificial intelligence we build won't harness a process analogous to biology, one that likewise doesn't include a backup mechanism as a goal?
#56
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
I don't know whether we'll decide not to include a backup mechanism. Just, as I was saying, I know that if anyone decides to sink billions into something that can be gone in one power surge or whatever, then we have finally reached the idiocracy point.
#57
Penultimate Amazing
Join Date: Nov 2006
Posts: 13,487
You're missing the point.
#58
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
Am I? I think I'm presenting a valid business point. If the risk versus the reward is too high and you have no way to mitigate that risk, you don't invest in whatever it is.

And I don't just mean the risk of not ending up with a good AI. I also mean the risk of losing whatever business data you had in it, and everything else. It's the kind of thing that sinks your whole business. Or if you want to sell it, now you have to convince the buyers that it's OK to pay for something that could just disappear along with their whole business data. Same deal.

I'm fairly confident that when that happens, I'll just mark it as idiocracy day on my calendar.
#61
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
Why? We hire specialists and experts all the time, in the full knowledge that if they get hit by a bus... we'll be okay.

We may discover that while artificial expert systems of the same caliber cannot be backed up or restored, they can be trained to maturity much faster and with much more consistent results. We may decide this is worth investing in.

What is your evidence that AI will be restorable from backup? Business convenience? That's exactly my evidence for why nuclear power doesn't produce radioactive waste.
#62
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
The difference is that you didn't invest billions in that guy and train him yourself from scratch for that position, as will be the case with the first AI.
#63
Penultimate Amazing
Join Date: Nov 2006
Posts: 13,487
But civilization has invested billions in the first of a lot of things.
#65
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 101,791
Why wouldn't they? If they are based on physical processes using electronics, we will be able to back them up and restore them. I do agree that if we have to go down to a more, say, biological process, with "self-altering" elements like the cells in a brain, then we may not be able to back up and restore as easily, or, more accurately, to the level of fidelity required to reproduce the "intelligence" we need.

However, there is nothing in principle that I know of which says that duplicating my brain to the nth degree would not give you another version of me, so I don't see why we wouldn't be able to back up and restore.

One way around the problem with a "back up and restore" approach is not to have only one "intelligence": feed several of them the same input, in a kind of RAID array for the intelligence. Then if one fails, you switch the output to another one. (See Saberhagen's Berserkers for RAID arrays of intelligence.)
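A minimal sketch of that failover idea, with every name in it (Mind, RaidOfMinds) invented for the example; the only point is that identical inputs keep the spares in sync, so losing the primary loses nothing:

Code:
class Mind:
    """Stand-in for one 'intelligence'; it just records what it has seen."""
    def __init__(self, name):
        self.name, self.healthy, self.state = name, True, []

    def feed(self, event):
        if self.healthy:
            self.state.append(event)   # every healthy mind tracks the same history

    def answer(self):
        return f"{self.name}: seen {len(self.state)} events"

class RaidOfMinds:
    """RAID-style redundancy: same input to all, output from the first healthy one."""
    def __init__(self, replicas):
        self.replicas = replicas

    def feed(self, event):
        for m in self.replicas:
            m.feed(event)

    def answer(self):
        for m in self.replicas:
            if m.healthy:
                return m.answer()      # fail over to the next healthy replica
        raise RuntimeError("all replicas lost")

raid = RaidOfMinds([Mind(f"mind-{i}") for i in range(3)])
for e in ["a", "b", "c"]:
    raid.feed(e)
raid.replicas[0].healthy = False       # primary dies; nothing is lost
print(raid.answer())                   # -> mind-1: seen 3 events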
#67
Penultimate Amazing
Join Date: Nov 2006
Posts: 13,487
I don't think anyone is arguing that an identical copy (including not just the position but the current velocity of all components) wouldn't be identical. I'm certainly not. I'm claiming the first one we build isn't necessarily going to be built on a RAID array or any other particular well-specified thing you can think of.

If we build it in some way that more resembles the biological process of evolution, we may not get something that can be backed up. The evolutionary process might lose the goal of taking a backup somewhere along the way and wind up just like our current brains: no current way to produce a copy/backup.
#68
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
Ok, let's start again. We may not know entirely how the processing works, but we know how the information is stored: as the strength of synapses. It's ultimately no different from how information is stored in a QLC flash cell (and with a comparable number of states), except that in this case it's interspersed along the way between the processing units.

When a signal comes, you basically get a probability that the synapse will trigger, based on that stored strength. (And we're talking quantum-level probability, which nixes any idea of determinism.) If it does trigger, the strength goes up. If it's never used, after a while the strength goes down. Essentially each neural column IS a self-rewiring FPGA, just with each connection having a small memory cell. Or technically a self-incrementing/decrementing register and a random number generator rolled into one. But for the purpose of a backup, let's focus on the memory cell.

Now, nature never needed to back that up. So in fact, not only can't it do that, it has no way of even getting at the synapse-strength information per se. There is no way for the brain to know something like "that synapse is at strength 0.3". You just send a signal down the line, and maybe nothing triggers. You don't know whether that means there's no connection at all (strength 0), or it was at nearly max strength but you rolled a natural 1, so to speak. Just that the connection never triggers. Or conversely it does pass the signal on to the next processing units, but you don't know whether it was a full-strength connection or a 0.1-strength one where you rolled the lucky dice. So yes, the brain is fundamentally built in a way that can't be backed up. (Sorry, fans of Upload, digital rapture, etc.) Evolution never needed access to the raw state; it just needed something that does the job.

HOWEVER, there is no reason -- IF we go the way of emulating a brain in silicon, which is a big IF -- that we can't directly access that memory cell to back it up. We can have a different grid on top of the FPGA that basically lets you access it like a flash grid. We're at the point of making 3D chips anyway; putting the FPGA in the odd layers and the read/write grid in the even layers would be fairly trivial.

And that's just if we emulate it in hardware. There's nothing to say we can't eventually emulate it in software, if we overshoot the necessary processing power enough. The brain may have a LOT of neurons, and each may need a LOT of transistors to emulate in hardware (not least for that synapse functionality), but each of them is incredibly slow by silicon standards. There's no reason to assume we can't have one CPU simulate a thousand of them per pass and still come out ahead. In which case, meh, the information is in RAM anyway.

Most of the problem is really that we're nowhere near having that kind of hardware power yet.
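To make that concrete, here's a toy model in Python. This is not neuroscience, just the claim above in code form: the stored strength is invisible from the outside (all nature can do is stimulate and watch), while an engineered version could trivially expose dump() and restore():

Code:
import random

class Synapse:
    """Toy synapse: hidden strength, probabilistic firing, use/disuse plasticity."""
    def __init__(self, strength=0.3):
        self._strength = strength          # hidden state, like a QLC cell's level

    def stimulate(self):
        fired = random.random() < self._strength
        if fired:
            self._strength = min(1.0, self._strength + 0.05)  # use strengthens
        return fired                        # an observer sees ONLY fired / not fired

    def decay(self):
        self._strength = max(0.0, self._strength - 0.01)      # disuse weakens

    # The brain has no analogue of these two methods; silicon easily could.
    def dump(self):
        return self._strength

    def restore(self, value):
        self._strength = value

grid = [Synapse(random.random()) for _ in range(1000)]
backup = [s.dump() for s in grid]          # the hypothetical read/write grid
for s in grid:
    s.stimulate()                           # state drifts after the snapshot
for s, v in zip(grid, backup):
    s.restore(v)                            # exact rollback; biology can't do this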
#69
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
You'd be surprised how many military decisions are ultimately business decisions. Because you have to get funding from Congress, private companies have to bid, lobbyists get involved, etc.

In fact, I can't think of any military hardware decision in the last couple of centuries that didn't go through some form of that or another.
#70
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
It's not clear to me that a purely transistorized, logic-gate approach will produce the kind of intelligence we're talking about, or that it could be backed up even so.
Hans is assuming that it must be so. I think it's far too early in our investigation of the possibilities to assume that.
#71
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
I tend to think of military decisions as being somewhat the opposite of business decisions: It's amazing what you can accomplish when you don't have to make money for shareholders, you just have to get something done by whatever means at your disposal. Helicopters? Ridiculously inefficient, unless you have money to burn, and/or you absolutely need that sweet vertical takeoff and landing. Submarines? Ridiculously inefficient. But fantastic if you need a second-strike nuclear capability that's almost impossible to intercept. Yes, there's always a budget question of cost-effectiveness, but warfare isn't really like a business at all.
On the other hand, von Clausewitz argues that of the human activities of science, art, and commerce, waging war more closely resembles commerce than either of the other two. Make of that what you will.
#72
Penultimate Amazing
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 58,550
It may turn out that emulating a brain in transistorized hardware just isn't possible. Even if it checks out in theory, the material requirements may well be prohibitive.

Relevant xkcd.

So okay, I'll grant that maybe a matryoshka brain around a neutron star could do the emulation in pure hardware. But what would backing that up even look like?
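For scale, a crude back-of-envelope on just the storage half of such a backup, using commonly cited ballpark figures for the human brain (both numbers are assumptions, and this says nothing about the compute side, which is the part that may well be prohibitive):

Code:
# Ballpark figures only: ~8.6e10 neurons, ~1e14 synapses.
synapses = 1e14
bytes_per_synapse = 1   # one byte comfortably holds a QLC-like handful of states
snapshot_bytes = synapses * bytes_per_synapse
print(f"~{snapshot_bytes / 1e12:.0f} TB per synapse-strength snapshot")  # ~100 TB

So merely storing the strengths pencils out to data-center scale, not matryoshka-brain scale; it's the read-out and the emulation that blow up.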
#73
Penultimate Amazing
Join Date: Mar 2009
Posts: 20,088
There's a difference between something being a business decision and it being run like a business or having to turn a profit.
Besides, even in normal companies a lot of decisions are about risk management, i.e., avoiding a loss, rather than purely increasing profit this quarter. Since we were talking backups, they're the perfect example of that.
#75
Illuminator
Join Date: Oct 2009
Posts: 4,391
In a thread devoted to wild speculation, we might as well consider the possibility that advances in quantum computing allow us to emulate a brain in quantum hardware before we are able to do so using discrete logic.
The no-cloning theorem tells us we can't back up the quantum state of a quantum computer.
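For reference, the standard textbook argument behind that, sketched in LaTeX:

Code:
% Suppose a single unitary U could copy arbitrary unknown states:
\[
  U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \qquad \text{for every } \lvert\psi\rangle .
\]
% Take two states and compare inner products before and after copying.
% Unitaries preserve inner products, so
\[
  \langle\psi\vert\phi\rangle
    = \langle\psi\vert\phi\rangle\,\langle 0\vert 0\rangle
    = \langle\psi\vert\phi\rangle^{2}
  \;\Longrightarrow\;
  \langle\psi\vert\phi\rangle \in \{0, 1\} .
\]
% So a unitary can clone only mutually orthogonal (i.e., effectively
% classical) states, never an arbitrary unknown quantum state.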
#77
Not a doctor.
Join Date: Jun 2009
Location: Texas
Posts: 24,928
My issue here is that you can't have a single AI and then test it against a single invention. My experience is not that inventors are generally creative or generally of great intellect. Instead, most of them seem to be specifically creative, and specifically smart about the same area they are specifically creative in.

I've had inventors with multiple patents in a field who couldn't even understand some bauble on my desk that was a working model of another client's patented invention. They seem to be masters of deep knowledge rather than broad knowledge. I realize you are proposing deep knowledge on all topics, thus giving the AI both deep knowledge and broad knowledge. But sometimes I wonder if the narrow, deep knowledge somehow helps to focus their creativity.

All that musing aside, there are tons of patents filed every year. Many never even issue, and yet some five million or so have issued in the US since your proposed date. So many that people are designing tools just to find out which of them are even worth taking a second look at.
#79
Master Poster
Join Date: Nov 2020
Posts: 2,440
FWIW, this shows some big differences between how we think and how AI works:
https://www.youtube.com/watch?v=BS2la3C-TYc