IS Forum

International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
 


Old 7th October 2021, 08:36 AM   #41
Armitage72
Philosopher
 
Armitage72's Avatar
 
Join Date: Mar 2012
Location: Rochester, NY
Posts: 6,414
Originally Posted by Ziggurat View Post
I suspect (but cannot prove) that we cannot build an AGI with anything close to human intelligence.

I could see uses for an AGI with high animal-level intelligence that could be trained like a smart dog.
Armitage72 is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 7th October 2021, 08:59 PM   #42
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 74,413
Originally Posted by Armitage72 View Post
I could see uses for an AGI with high animal-level intelligence that could be trained like a smart dog.
Ooh, now there's a premise for a sci-fi story.
__________________
We are all #KenBehrens
Old 7th October 2021, 11:37 PM   #43
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Originally Posted by Ziggurat View Post
I don't think time scales are the problem. We don't know how to make computers smart, but we do know how to make them fast. If we ever figure out how to make them smart, then there's no reason to assume they will have to operate at the same speeds (in terms of thinking OR in terms of learning) as humans do. I don't think we will figure out the smart part, but if I'm wrong about that, time scales probably won't be a problem.
1. Making it not just match human brain power, but run, say, an order of magnitude faster than that, may still be some time away.

2. The point isn't just how fast your brain or the AI works. The point is that you have to experience a LOT of reality for that model to click into place. Like, you actually need to see people and things come and go for like two years before the brain figures out that they still exist when you don't see them. (Like, that mom doesn't actually cease to exist when she covers her face with her hands when playing peek-a-boo.) You need to actually talk to people a lot, for several years, to figure out that they don't know the same things you do and don't see the world from your position. Etc.

That's the problem I see with basically just emulating a human brain and letting it learn: All those consecutively better world models come from actually getting that kind of experience. You actually need that kind of experience.

Now I suppose you could put it in a faster simulation of RL, but then if you don't already have a human-like AI in the first place, those simulated people might not say the same things as real ones. So you might end up with a model that just fits the simulation, but not the real world.

Like, if I were an AI learning off Skyrim NPCs, my model might end up being that everyone does know every relevant thing I've done even if they weren't seeing it, which is the polar opposite of what one of the Piaget stages is about. Or I might learn that people only react to what I said directly to them, even if the other guy I told a different lie to is like 1m away.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 7th October 2021 at 11:40 PM.
Old 7th October 2021, 11:48 PM   #44
Ziggurat
Penultimate Amazing
 
 
Join Date: Jun 2003
Posts: 50,028
Originally Posted by HansMustermann View Post
2. The point isn't just how fast your brain or the AI works. The point is that you have to experience a LOT of reality for that model to click into place.
Sure. But the human brain has limited data input bandwidth as well as limited processing power. A machine can be fed information much faster. The amount of data may be equivalent to, say, years of HD video input, but that doesn't mean it takes years for your computer to load and process it.

The other thing you can do with machines but not people is parallelize it. It can be interacting with, say, 100 people at a time to learn from them. The limits on how fast we can do things will not be the limits on how fast an AI can do things.
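A toy sketch of that parallelization point, purely illustrative (the Learner class, the conversation stand-ins, and the worker counts are all invented here, not a real training setup):

```python
# One "learner" aggregating experience from 100 simulated conversation
# partners at once, instead of one at a time like a human would.
from concurrent.futures import ThreadPoolExecutor

class Learner:
    def __init__(self):
        self.experience = []

    def absorb(self, utterances):
        self.experience.extend(utterances)

def converse(partner_id, turns=10):
    # Stand-in for a real interaction: each partner yields some utterances.
    return [f"partner-{partner_id} says thing {t}" for t in range(turns)]

learner = Learner()
# A human holds one conversation at a time; the machine holds 100.
with ThreadPoolExecutor(max_workers=100) as pool:
    for utterances in pool.map(converse, range(100)):
        learner.absorb(utterances)

print(len(learner.experience))  # 100 partners x 10 turns = 1000 items
```

The same wall-clock time yields a hundredfold more experience, which is the whole argument in miniature.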

Let me reiterate that I don't think we will ever make true strong AI. I think it's just too complicated for us to figure out. But the obstacle isn't insufficient time.
__________________
"As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law
Old 8th October 2021, 12:52 AM   #45
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Well, it kinda is at the moment. It may not be in the future, but it is now.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 8th October 2021, 07:04 AM   #46
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
Originally Posted by HansMustermann View Post
That's the problem I see with basically just emulating a human brain and letting it learn: All those consecutively better world models come from actually getting that kind of experience. You actually need that kind of experience.
In a world where AI exists, what will stop us from doing this once with one machine and then copying the results into all subsequent AIs to skip the training period?
Old 8th October 2021, 08:01 AM   #47
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by RecoveringYuppy View Post
In a world where AI exists, what will stop us from doing this once with one machine and then copying the results into all subsequent AIs to skip the training period?
Depends on how exactly the enlightened state is realized.

In a world where Natural Intelligence exists, what will stop us from doing this once with one person and then copying the results into all subsequent persons to skip the training period?

Turns out that copying the exact state of the electrochemical soup that represents the enlightened state is a problem that stops us cold. And that's before we even get to the problems of how to induce that in another brain without killing its owner.
__________________
There is no Antimemetics Division.

Last edited by theprestige; 8th October 2021 at 08:46 AM.
Old 8th October 2021, 08:18 AM   #48
Armitage72
Philosopher
 
 
Join Date: Mar 2012
Location: Rochester, NY
Posts: 6,414
In the webcomic I mentioned, despite AIs being recognized as citizens, nobody knows how AIs are actually created. The scientist who is considered "the father of AI" doesn't know. The AIs themselves don't know. It's known that if you perform actions "A", "B", and "C" under conditions "X", "Y", and "Z", a sapient consciousness emerges, but nobody can figure out why.
Old 8th October 2021, 01:24 PM   #49
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Originally Posted by RecoveringYuppy View Post
In a world where AI exists, what will stop us from doing this once with one machine and then copying the results into all subsequent AIs to skip the training period?
Nothing whatsoever. But you still have to invest the time to do the first one.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 8th October 2021, 03:06 PM   #50
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
Nothing whatsoever.
I don't think you can say that without a very clear and detailed idea of the implementation.
__________________
There is no Antimemetics Division.
Old 8th October 2021, 04:20 PM   #51
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Someone would have to be pretty bloody stupid to make a computer that fundamentally doesn't allow backups. But then I suppose the idiocracy is bound to happen sooner or later.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 8th October 2021, 04:26 PM   #52
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
Someone would have to be pretty bloody stupid to make a computer that fundamentally doesn't allow backups. But then I suppose the idiocracy is bound to happen sooner or later.
We're not talking about a computer. We're talking about an intelligence. For all we know, it may be impossible to implement intelligence in such a way that the intelligent state can be copied as such. In human brains, the memory store and the instruction processor are tightly coupled in a feedback loop that cannot be broken without destroying the intelligence. It might turn out that's the only way to do it.
__________________
There is no Antimemetics Division.

Last edited by theprestige; 8th October 2021 at 04:33 PM.
Old 8th October 2021, 04:43 PM   #53
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
Yeah. The human brain doesn't allow for backups. If we produce AIs using something similar there may not be a backup/copy mechanism.
Old 8th October 2021, 10:33 PM   #54
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
The human brain is basically a self-rewiring FPGA. Or rather, an imperial butt-load of FPGAs (the neural columns) around a massive bandwidth hub.

Yes, biology never needed to evolve a way to back up that FPGA, but it turns out we know how to back up and restore an FPGA we make ourselves. As in literally, we can back up and restore the "wiring" between those gates. And there's no reason we wouldn't come up with a way to do it if it's some different variation on that theme, even if it might involve a bit more circuitry for that, even if nature never needed such.

So I repeat myself: someone would have to be bloody stupid to come up with one that fundamentally can't be backed up. And it's not just that one engineer has to wake up with an idea like, "hey, let's ditch backups and lose years of work if lightning strikes". The whole chain of command above him has to be ok with that idea.

It COULD happen, but as I was saying, at that point you can know that the idiocracy is here.
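A toy illustration of the backup argument: if the "wiring" of an engineered network is just stored numbers, snapshotting and restoring it is a few lines. The Network class and its rewire rule are invented here for illustration; this is not a real FPGA toolchain.

```python
# If learned state is readable numbers, backup is just "copy the numbers out"
# and restore is "write them back" -- unlike a biological brain, whose
# synapse strengths can't be read out directly.
import copy
import random

class Network:
    def __init__(self, n, seed=0):
        rng = random.Random(seed)
        # Connection "strengths" between n units.
        self.weights = [[rng.random() for _ in range(n)] for _ in range(n)]

    def rewire(self, rng):
        # Stand-in for learning: nudge a random connection upward.
        n = len(self.weights)
        i, j = rng.randrange(n), rng.randrange(n)
        self.weights[i][j] = min(1.0, self.weights[i][j] + 0.1)

net = Network(4)
backup = copy.deepcopy(net.weights)   # "backup": read out every connection

rng = random.Random(42)
for _ in range(100):
    net.rewire(rng)                   # state drifts / gets corrupted

net.weights = copy.deepcopy(backup)   # "restore": write the old state back
print(net.weights == backup)          # True: pre-damage wiring is recovered
```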
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 8th October 2021 at 10:36 PM.
Old 8th October 2021, 10:38 PM   #55
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
How do you know the brain is an FPGA (or that that is the sufficient part to make it intelligent)? How do you know we can make an FPGA we build intelligent? How do you know that the first way we build an artificial intelligence will not be harnessing a process analogous to biology that also doesn't include a backup mechanism as a goal?
Old 8th October 2021, 11:21 PM   #56
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
I don't know if we'll decide to not include a backup mechanism. Just, as I was saying, I know that if anyone decides to sink billions into something that can be gone in one power surge or whatever, then we have finally reached the idiocracy point.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 8th October 2021, 11:35 PM   #57
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
You're missing the point.
Old 8th October 2021, 11:48 PM   #58
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Am I? I think I'm presenting a valid business point. If the risk vs reward is too high and you have no ways to mitigate that risk, you don't invest in whatever it is.

And I don't just mean the risk of not ending up with a good AI. I also mean the risk of losing whatever business data you had in it, and everything. It's the kind of thing that sinks your whole business.

Or if you want to sell it, now you have to convince the buyers it's ok to pay for something that could just disappear with their whole business data. Same deal.

I'm fairly confident that when that happens, I'll just mark it as idiocracy day on my calendar.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 8th October 2021 at 11:53 PM.
Old 9th October 2021, 01:34 AM   #59
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
The human brain is basically a self-rewiring FPGA. Or rather, an imperial butt-load of FPGAs (the neural columns) around a massive bandwidth hub.

Yes, biology never needed to evolve a way to back up that FPGA, but it turns out we know how to back up and restore an FPGA we make ourselves. As in literally, we can back up and restore the "wiring" between those gates. And there's no reason we wouldn't come up with a way to do it if it's some different variation on that theme, even if it might involve a bit more circuitry for that, even if nature never needed such.

So I repeat myself: someone would have to be bloody stupid to come up with one that fundamentally can't be backed up. And it's not just that one engineer has to wake up with an idea like, "hey, let's ditch backups and lose years of work if lightning strikes". The whole chain of command above him has to be ok with that idea.

It COULD happen, but as I was saying, at that point you can know that the idiocracy is here.
An FPGA is purely an instruction processor. It's not the data store. Human memory, our data store, appears to be an emergent state of our instruction processing. It's not a separate thing that gets fed into the processor. It doesn't exist without the processing.
__________________
There is no Antimemetics Division.
Old 9th October 2021, 01:35 AM   #60
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
I don't know if we'll decide to not include a backup mechanism. Just, as I was saying, I know that if anyone decides to sink billions into something that can be gone in one power surge or whatever, then we have finally reached the idiocracy point.
There may not be any other way to do it. You keep insisting on a conclusion based on assumptions we cannot yet make.
__________________
There is no Antimemetics Division.
Old 9th October 2021, 01:44 AM   #61
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
Am I? I think I'm presenting a valid business point. If the risk vs reward is too high and you have no ways to mitigate that risk, you don't invest in whatever it is.

And I don't just mean the risk of not ending up with a good AI. I also mean the risk of losing whatever business data you had in it, and everything. It's the kind of thing that sinks your whole business.

Or if you want to sell it, now you have to convince the buyers it's ok to pay for something that could just disappear with their whole business data. Same deal.

I'm fairly confident that when that happens, I'll just mark it as idiocracy day on my calendar.
Why? We hire specialists and experts all the time, in the full knowledge that if they get hit by a bus... We'll be okay.

We may discover that while artificial expert systems of the same caliber cannot be backed up or restored, they can be trained to maturity much faster and with much more consistent results. We may decide this is worth investing in.

What is your evidence that AI will be restorable from backup? Business convenience? That's exactly my evidence for why nuclear power doesn't produce radioactive waste.
__________________
There is no Antimemetics Division.
Old 9th October 2021, 02:08 AM   #62
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
The difference is that you didn't invest billions in that guy and train him yourself from scratch for that position, as will be the case with the first AI.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 9th October 2021, 07:14 AM   #63
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
But civilization has invested billions in the first of a lot of things.
Old 9th October 2021, 07:53 AM   #64
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
The difference is that you didn't invest billions in that guy and train him yourself from scratch for that position, like will be the case with the first AI.
You're still missing the point. It simply may not be possible to do intelligence any other way. Why are you assuming it will be?
__________________
There is no Antimemetics Division.
Old 9th October 2021, 08:25 AM   #65
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 99,106
Originally Posted by theprestige View Post
Why? We hire specialists and experts all the time, in the full knowledge that if they get hit by a bus... We'll be okay.

We may discover that while artificial expert systems of the same caliber cannot be backed up or restored, they can be trained to maturity much faster and with much more consistent results. We may decide this is worth investing in.

What is your evidence that AI will be restorable from backup? Business convenience? That's exactly my evidence for why nuclear power doesn't produce radioactive waste.
Why wouldn't they? If they are based on physical processes using electronics, we will be able to back them up and restore them. I do agree that if we have to go down to a more, say, biological process, with "self-altering" elements like the cells in a brain, then we may not be able to back up and restore as easily, or at the level of fidelity required to reproduce the "intelligence" we need.

However, there is nothing in principle that I know of which means that duplicating my brain to the nth degree would not give you another version of me, so I don't know why we wouldn't be able to back up and restore.

One way around the problem of a "back up and restore" approach is not to have only one "intelligence": feed the same input to several and go to a type of RAID array for the intelligence. Then if one fails you switch the output to another one. (See Saberhagen's Berserkers for RAID arrays of intelligence. )
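A rough sketch of that RAID-array idea: several replicas consume the same input stream, and output is read from whichever one is still alive. All the names here (Replica, IntelligenceArray) are invented for illustration; this is a redundancy pattern, not a real AI design.

```python
# Redundancy instead of backup: every replica accumulates the same
# experience, so losing one loses nothing.
class Replica:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.state = []

    def feed(self, item):
        if self.alive:
            self.state.append(item)

class IntelligenceArray:
    def __init__(self, replicas):
        self.replicas = replicas

    def feed(self, item):
        for r in self.replicas:      # same input to every replica
            r.feed(item)

    def output(self):
        for r in self.replicas:      # first live replica answers
            if r.alive:
                return r.name, len(r.state)
        raise RuntimeError("total array failure")

array = IntelligenceArray([Replica("A"), Replica("B"), Replica("C")])
for item in range(5):
    array.feed(item)

array.replicas[0].alive = False      # replica A dies
print(array.output())                # ('B', 5): B has the same experience
```

The trade-off is cost: you pay for N copies running all the time, rather than one copy plus a cheap snapshot.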
__________________
I wish I knew how to quit you
Old 9th October 2021, 09:28 AM   #66
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
Originally Posted by HansMustermann View Post
Am I? I think I'm presenting a valid business point.
What if it's a military decision?
Old 9th October 2021, 09:32 AM   #67
RecoveringYuppy
Penultimate Amazing
 
Join Date: Nov 2006
Posts: 12,326
Originally Posted by Darat View Post
However, there is nothing in principle that I know of that means that if you duplicate my brain to the nth degree that it would not give you another version of me so I don't know why we wouldn't be able to backup and restore.
I don't think anyone is arguing that an identical copy (including not just position but current velocity of all components) wouldn't be identical. I'm certainly not. I'm claiming the first one we build isn't necessarily going to be built on a RAID array or any other particular well specified thing you can think of.

If we build it in some way that more resembles the biological process of evolution, we may not get something that can be backed up. The evolutionary process might lose the goal of taking a backup somewhere along the way and wind up being just like our current brains: no current way to produce a copy/backup.
Old 10th October 2021, 12:51 PM   #68
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Ok, let's start again. We may not know entirely how the processing works, but we do know how the information is stored: as the strength of synapses. It's ultimately no different from how information is stored in a QLC flash cell (and with a comparable number of states), just in this case it's interspersed along the way between the processing units.

When a signal comes, you basically get a probability that the synapse will trigger, based on that stored strength. (And we're talking quantum level probability, which nixes any idea of determinism.) If it does, the strength goes up. If it's never used, after a while that strength goes down.

But essentially each neural column IS a self-rewiring FPGA, just each connection has a small memory cell. Or technically a self-incrementing/decrementing register and a random number generator rolled in one. But for the purpose of a backup, let's focus on the memory cell.

Now nature never needed to back that up. So in fact, not only can it not do that, it has no way of even getting at the synapse strength information per se. There is no way for the brain to even know stuff like "that synapse is at strength 0.3". You just send a signal down the line, and maybe nothing triggers. You don't know if that means there's no connection at all (strength 0), or it was nearly max strength but you rolled a natural 1 so to speak, or what. Just that the connection never triggers. Or conversely it does give the signal to the next processing units, but you don't know if it was a full-strength connection, or a 0.1-strength one and you rolled the lucky dice, or what.
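A toy model of that point: a synapse that fires probabilistically from a hidden strength. From the outside you only ever see fire / no-fire, never the strength itself. Purely illustrative, not neuroscience.

```python
# The stored strength is private; any single trial yields one bit,
# and even many trials only give you a statistical estimate.
import random

class Synapse:
    def __init__(self, strength, rng):
        self._strength = strength  # hidden: nothing downstream reads this
        self._rng = rng

    def signal(self):
        # All an observer gets is a single fire/no-fire bit per trial.
        return self._rng.random() < self._strength

rng = random.Random(1)
weak, strong = Synapse(0.1, rng), Synapse(0.9, rng)

# Any one trial is uninformative: the weak synapse can fire,
# the strong one can fail to.
one_weak, one_strong = weak.signal(), strong.signal()

# Only statistics over many trials hint at the hidden value,
# never the exact stored strength.
trials = 10_000
est = sum(strong.signal() for _ in range(trials)) / trials
print(round(est, 2))
```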

So yes, the brain is fundamentally built in a way that can't be backed up. (Sorry fans of Upload, digital rapture, etc.) Because evolution never needed to even be able to access the raw state, it just needed something that does the job.

HOWEVER, there is no reason -- IF we go the way of emulating a brain in silicon, which is a big IF -- that we can't directly access that memory cell to back it up. We can have a different grid on top of the FPGA that basically lets you access it like it's a flash grid. We're at the point of making 3D chips anyway. Making the FPGA only in the odd rows and the read/write grid in the even rows would be fairly trivial.

And that's just if we emulate it in hardware.

But there's nothing to say we can't eventually emulate it in software, if we overshoot the necessary processing power enough. The brain may have a LOT of neurons, and each may need a LOT of transistors to emulate in hardware (not least for that synapse functionality), but each of them is incredibly slow by silicon standards. There's no reason to assume we can't just have a CPU simulating a thousand of them for each pass, and still come out ahead.

In which case, meh, the information is in RAM anyway.

Most of the problem is really: we're nowhere near having that kind of hardware power yet.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?

Last edited by HansMustermann; 10th October 2021 at 12:57 PM.
Old 10th October 2021, 12:55 PM   #69
HansMustermann
Penultimate Amazing
 
 
Join Date: Mar 2009
Posts: 19,358
Originally Posted by RecoveringYuppy View Post
What if it's a military decision?
You'd be surprised how many military decisions are ultimately business decisions. Because you have to get funding from Congress, some private companies have to bid, some lobbyists get involved, etc.

In fact, I can't think of any military hardware decision in the last couple of centuries that didn't go through some form of that, one way or another.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 10th October 2021, 01:26 PM   #70
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by Darat View Post
Why wouldn't they? If they are based on physical processes using electronics we will be able to back them up and restore them. I do agree if we have to go down to a more say biological process, so we have "self altering" elements - like cells in a brain then we may not be able to backup and restore as easily or more accurately to the level of fidelity required to reproduce the "intelligence" we need.
It's not clear to me that a purely transistorized, logic-gate approach will produce the kind of intelligence we're talking about, or that it could be backed up even so.

Hans is assuming that it must be so. I think it's far too early in our investigation of the possibilities to assume that.
__________________
There is no Antimemetics Division.
Old 10th October 2021, 01:32 PM   #71
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
You'd be surprised how many military decisions are ultimately business decisions. Because you have to get funding from Congress, some private companies have to bid, some lobbyists get involved, etc.

In fact, I can't think of any military hardware decision in the last couple of centuries that didn't go some form of that or another.
I tend to think of military decisions as being somewhat the opposite of business decisions: It's amazing what you can accomplish when you don't have to make money for shareholders, you just have to get something done by whatever means at your disposal. Helicopters? Ridiculously inefficient, unless you have money to burn, and/or you absolutely need that sweet vertical takeoff and landing. Submarines? Ridiculously inefficient. But fantastic if you need a second-strike nuclear capability that's almost impossible to intercept. Yes, there's always a budget question of cost-effectiveness, but warfare isn't really like a business at all.

On the other hand, von Clausewitz argues that of the human activities of science, art, and commerce, waging war more closely resembles commerce than either of the other two. Make of that what you will.
__________________
There is no Antimemetics Division.

Last edited by theprestige; 10th October 2021 at 01:36 PM.
Old 10th October 2021, 01:36 PM   #72
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
Most of the problem is really: we're nowhere near having that kind of hardware power yet.
It may turn out that emulating a brain in transistorized hardware just isn't possible. Even if it checks out in theory, the material requirements may well be prohibitive.

Relevant xkcd.

So okay, I'll grant that maybe a matryoshka brain around a neutron star could emulate a brain in pure hardware. But what would backing that up even look like?
__________________
There is no Antimemetics Division.
Old 10th October 2021, 10:53 PM   #73
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 19,358
Originally Posted by theprestige View Post
I tend to think of military decisions as being somewhat the opposite of business decisions: It's amazing what you can accomplish when you don't have to make money for shareholders, you just have to get something done by whatever means at your disposal. Helicopters? Ridiculously inefficient, unless you have money to burn, and/or you absolutely need that sweet vertical takeoff and landing. Submarines? Ridiculously inefficient. But fantastic if you need a second-strike nuclear capability that's almost impossible to intercept. Yes, there's always a budget question of cost-effectiveness, but warfare isn't really like a business at all.
There's a difference between something being a business decision and it being run like a business or having to turn a profit.

Besides, even in normal companies a lot of decisions are about risk management, i.e., avoiding a loss, rather than purely increasing profit this quarter. Since we were talking backups, they're the perfect example of that.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 10th October 2021, 10:54 PM   #74
HansMustermann
Penultimate Amazing
 
HansMustermann's Avatar
 
Join Date: Mar 2009
Posts: 19,358
Originally Posted by theprestige View Post
It may turn out that emulating a brain in transistorized hardware just isn't possible. Even if it checks out in theory, the material requirements may well be prohibitive.
Well, THAT may very well turn out to be the case, especially with Moore's law having stalled hard.
__________________
Which part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" don't you understand?
Old 11th October 2021, 04:35 AM   #75
W.D.Clinger
Illuminator
 
W.D.Clinger's Avatar
 
Join Date: Oct 2009
Posts: 4,104
Originally Posted by Darat View Post
Why wouldn't they? If they are based on physical processes using electronics, we will be able to back them up and restore them.
Originally Posted by HansMustermann View Post
HOWEVER, there is no reason -- IF we go the way of emulating a brain in silicon, which is a big IF -- that we can't directly access that memory cell to back it up.
Originally Posted by theprestige View Post
It may turn out that emulating a brain in transistorized hardware just isn't possible.
In a thread devoted to wild speculation, we might as well consider the possibility that advances in quantum computing allow us to emulate a brain in quantum hardware before we are able to do so using discrete logic.

The no-cloning theorem tells us we can't back up the quantum state of a quantum computer.
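For readers who want the gist, here is a compact sketch of the standard argument, assuming an idealized cloning machine. If a single unitary $U$ could copy an arbitrary unknown state onto a blank register,

$$U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle,$$

then for any two states $|\psi\rangle$ and $|\phi\rangle$, unitarity (preservation of inner products) would require

$$\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^2,$$

so $\langle\psi|\phi\rangle$ must be $0$ or $1$. Only mutually orthogonal - effectively classical - states can be copied; an arbitrary unknown quantum state cannot be read out and duplicated, which is exactly what a backup would require.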
Old 13th October 2021, 03:54 PM   #76
theprestige
Penultimate Amazing
 
Join Date: Aug 2007
Location: The Antimemetics Division
Posts: 54,969
Originally Posted by HansMustermann View Post
Well, THAT may very well turn out to be the case, especially with Moore's law having hard stalled.
So much for your assumption that it must be possible that way, and that only an idiot wouldn't think to do it that way.
__________________
There is no Antimemetics Division.
Old 14th October 2021, 03:12 PM   #77
Dr. Keith
Not a doctor.
 
Dr. Keith's Avatar
 
Join Date: Jun 2009
Location: Texas
Posts: 23,795
Originally Posted by theprestige View Post
The idea was first patented in 1978.

So my question is this: If you fed an AI all of human history up to 1978, would it see the opportunity? Would it suggest such an innovation?

Or any other commercial innovation. If your AI can't recognize profit opportunities arising from human desire, and invent ways to profit from that desire, on par with actual humans who did recognize and did invent, then it's not much of an AI.

To be fair, most humans aren't on the same level as our greatest inventors and entrepreneurs. But if your AI can at least match the inventiveness and entrepreneurship of a five-year-old, that would be a good start.
My issue here is that you can't take a single AI and test it against a single invention. My experience is not that inventors are generally creative or generally of great intellect. Instead, most of them seem to be specifically creative, and specifically smart about the same area they are specifically creative in.

I've had inventors with multiple patents in a field who couldn't even understand some bauble on my desk that was a working model of another client's patented invention. They seem to be masters of deep knowledge rather than broad knowledge.

I realize you are proposing deep knowledge on all topics, thus giving the AI both deep knowledge and broad knowledge. But sometimes I wonder if the narrow, deep knowledge somehow helps to focus their creativity.

All that musing aside, there are tons of patents filed every year. Many never even issue, and yet some five million or so have issued in the US since your proposed date. So many that people are designing software just to find out which of them are even worth a second look.
__________________
Suffering is not a punishment not a fruit of sin, it is a gift of God.
He allows us to share in His suffering and to make up for the sins of the world. -Mother Teresa

If I had a pet panda I would name it Snowflake.
Old 22nd October 2021, 03:27 PM   #78
Cat Not Included
Thinker
 
Join Date: Apr 2016
Posts: 166
Originally Posted by Armitage72 View Post
I still say a good test for an AI will be whether it can GM one of the more complex tabletop RPGs with a group of human players and respond to their actions with the flexibility of a human GM.
I've known lots and lots of human GMs who've failed that test...
Old 22nd October 2021, 04:34 PM   #79
Mike Helland
Master Poster
 
Join Date: Nov 2020
Posts: 2,291
FWIW, this shows some big differences between how we think and how AI works:

https://www.youtube.com/watch?v=BS2la3C-TYc

Powered by vBulletin. Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.

This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.