Super Artificial Intelligence, a naive approach

(i)
Life's meaning probably occurs on the horizon of optimization:

(source: MIT physicist Jeremy England proposes new meaning of life)




(ii)
Today, artificial intelligence exceeds mankind in many human cognitive tasks:

(source: can we build ai without losing control over it?)

(source: the wonderful and terrifying implications of computers that can learn)





(iii)
The creation of general artificial intelligence is, so far, mankind's largely pertinent task, and this involves (i), i.e. optimization.

The human brain computes roughly 10^16 to 10^18 synaptic operations per second.




(iv)
Mankind has already created brain based models that achieve 10^14 of the above total in (iii).

If mankind isn't erased (via some catastrophe), then on the horizon of Moore's Law, mankind will probably create machines with human-level brain power (and, relevantly, human-like efficiency) by 2020 at the latest.




(v)
Using clues from quantum mechanics and modern machine learning, I have composed (am composing) a naive fabric in aims of absorbing some non-trivial intelligence's basis.

Paper + Starting Code (rudimentary): "thought curvature"




(vi)
Criticism is welcome/needed.
 
Sorry to bother you, but are you the inventor of non beliefism!? Can I get a photo with you?
 
ProgrammingGodJordan said:
........
The human brain computes roughly 10^16 to 10^18 synaptic operations per second. ......

Mankind has already created brain based models that achieve 10^14 of the above total..........
So, one ten thousandth the number. Tear up everything we know and re-write the dictionary.

Could you tell us what the error is, with the figures you highlighted?
 
I already did. You are mis-using "pertinent".

You highlighted 10^14 and 10^18 before adding the pertinent comment.

Why did you highlight those figures?

What did you mean by "one ten thousandth the number"? (a comment you made under the figures)

So, what errors do you find in the figures you highlighted and criticized in reply #8?
 
Deepmind’s atari q architecture encompasses non pooling convolutions, therein generating object shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled t state; translation non invariance. Separately, uetorch, encodes an object trajectory behaviour physics learner, particularly on pooling layers; translation invariance.
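The invariance distinction can at least be demonstrated with a toy example (a sketch only; this is not code from DeepMind's Atari network or uetorch, and `max_pool` is a hypothetical helper written for illustration):

```python
def max_pool(xs, size=2):
    """Non-overlapping 1-D max pooling over a list of activations."""
    return [max(xs[i:i + size]) for i in range(0, len(xs), size)]

# A single feature activation, then the same feature shifted by one position.
signal = [0, 0, 5, 0, 0, 0, 0, 0]
shifted = signal[-1:] + signal[:-1]   # circular shift by one

# The raw activations differ after the shift (translation sensitivity)...
print(signal == shifted)                           # → False
# ...but the pooled maps are identical (translation invariance).
print(max_pool(signal) == max_pool(shifted))       # → True
```

A network without pooling keeps the raw, shift-sensitive map (useful when exact position matters, as in Atari control); a pooled network discards exact position. That is all "translation (non-)invariance" means here.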

It is non-abstrusely observable, that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al), amalgamating in perceptual learning abstractions into non-childhood.

As such, it is perhaps exigent that non invariant fabric composes in the invariant, therein engendering time-space complex optimal causal, conscious artificial construction.

If this confluence is reasonable, is such paradoxical?

A genuine question: Was this written by AI code? The reason I ask is that several years ago I created a module that generated prose very similar to what we see here. Of course, it was all nonsense, but it was grammatically correct and thus appeared impressive to the casual viewer.
 
A genuine question: Was this written by AI code? The reason I ask is that several years ago I created a module that generated prose very similar to what we see here. Of course, it was all nonsense, but it was grammatically correct and thus appeared impressive to the casual viewer.

I wrote the paper.

Some related code, however crude, exists in relation to the paper.

The topics discussed are probably familiar to undergraduate machine learning students.
 
ProgrammingGodJordan said:
........
The human brain computes roughly 10^16 to 10^18 synaptic operations per second. ......

Mankind has already created brain based models that achieve 10^14 of the above total..........
So, one ten thousandth the number. Tear up everything we know and re-write the dictionary.

I still await your expression.

What errors do you find in the figures you highlighted and criticized in reply #8?

I shall continue to ask, until you respond, as I am unable to descry better figures than the ones I posted, which you appeared to point out to be wrong.
 
Super Artificial Intelligence, a naive approach

Yes, I certainly agree with that!!!! It definitely is!!!!!!
 
(i)
Life's meaning probably occurs on the horizon of optimization:

(source: mit physicist, Jeremy England proposes new meaning of life)
This cite is a layman's blurb that links to a lay article at Business Insider. The blurb contains such gems as:

According to England, the second law of thermodynamics gives life its meaning. The law states that entropy, i.e. decay, will continuously increase. Imagine a hot cup of coffee sitting at room temperature. Eventually, the cup of coffee will reach room temperature and stay there: its energy will have dissipated. Now imagine molecules swimming in a warm primordial ocean. England claims that matter will slowly but inevitably reorganize itself into forms that better dissipate the warm oceanic energy.

I didn't bother to read the referenced lay article because Business Insider hates my adblocker. There may be some scholarly discussion of the "meaning of life", but PGJ hasn't cited it here. More than likely, though, is that "MIT physicist Jeremy England" was engaging in casual speculation on the topic, rather than serious scientific analysis. In any case, lay blurbs in Business Insider are not an auspicious start to a serious discussion of AI.

But at least I can translate (part of) PGJ's thesis. Once again, it's Underpants Gnomes:

1. If the meaning of life is to organize into more efficient energy-dissipation systems, then
2. ???
3. Therefore, superhuman AI is inevitable.
 
This cite is a layman's blurb that links to a lay article at Business Insider. The blurb contains such gems as:

But at least I can translate (part of) PGJ's thesis. Once again, it's Underpants Gnomes:

1. If the meaning of life is to organize into more efficient energy-dissipation systems, then
2. ???
3. Therefore, superhuman AI is inevitable.

(1)
See Jeremy England's work on dissipative adaptation etc.
(as indicated in the article)


(2)
Machine learning algorithms are optimization mechanisms that organize into more and more energy-efficient systems as scientists extend their bases.


(3)
Humans are energy-efficient learning systems.


(4)
Modern machine learning concerns making more energy-efficient systems that approach (3).




Edit: let me 'connect more dots' for you:


(5) Sophisticated super-artificial intelligence could optimize other tasks in nature...
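Point (2) can at least be made concrete: a learning algorithm is literally an optimizer. A minimal gradient-descent sketch (illustrative only; the function and parameters are made up for the example, and no claim about energy efficiency is coded here):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))   # → 3.0
```

This is the sense in which "machine learning algorithms are optimization mechanisms": training is the repeated minimization of an objective.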
 
You can't do anything with certainty.

See uncertainty principle.

Heisenberg's uncertainty principle is a mathematical expression of the limits of knowledge of specific properties in wave-like systems. As Heisenberg put it:

One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.

Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30. (from Wikipedia)

The Uncertainty Principle is not universally applicable (e.g., to "anything"), nor is it relevant to philosophical or colloquial notions of uncertainty. It is not applicable to fuelair's usage here. It has nothing to say about his confidence in your naivete.
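For reference, the principle is quantitative, Δx·Δp ≥ ħ/2, and a quick numeric illustration shows the scale at which it bites (an electron confined to roughly an atom's width; the constants are standard CODATA values):

```python
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg

dx = 1e-10                # position uncertainty ~ one angstrom, in m
dp_min = hbar / (2 * dx)  # minimum momentum uncertainty, kg*m/s
dv_min = dp_min / m_e     # corresponding velocity uncertainty, m/s

print(f"dp >= {dp_min:.3e} kg*m/s")
print(f"dv >= {dv_min:.3e} m/s")
```

The bound constrains conjugate observables of quantum states; it says nothing about macroscopic or colloquial "certainty", which is the point being made above.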
 
(1)
See Jeremy England's work on dissipative adaptation etc.
(as indicated in the article)

The article is a lay blurb that indicates almost nothing. It links to another lay article that I can't read because the website requires me to disable my adblocker first (which I will not do).

Have you read Jeremy England's work on "dissipative adaptation"? Where is it published? Can you cite it here?

All you have cited so far is a Business Insider gossip column about a Business Insider "news" article that purports to be about Jeremy England's work. Have you actually read his work? Or have you just read the gossip column?
 
Heisenberg's uncertainty principle is a mathematical expression of the limits of knowledge of specific properties in wave-like systems. As Heisenberg put it:

One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.

Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30. (from Wikipedia)

The Uncertainty Principle is not universally applicable (e.g., to "anything"), nor is it relevant to philosophical or colloquial notions of uncertainty. It is not applicable to fuelair's usage here. It has nothing to say about his confidence in your naivete.

Unless philosophy/colloquial terms are outside of the universe, uncertainty still applies.
 
The article is a lay blurb that indicates almost nothing. It links to another lay article that I can't read because the website requires me to disable my adblocker first (which I will not do).

Have you read Jeremy England's work on "dissipative adaptation"? Where is it published? Can you cite it here?

All you have cited so far is a Business Insider gossip column about a Business Insider "news" article that purports to be about Jeremy England's work. Have you actually read his work? Or have you just read the gossip column?


http://www.englandlab.com/publications.html
 
Unless philosophy/colloquial terms are outside of the universe, uncertainty still applies.

Are you certain of that?

Besides, uncertainty may be a prudent approach to life's great questions, but the uncertainty principle in quantum mechanics only applies to quantum mechanics. There is no such thing as a general "uncertainty principle".
 
(1)
See Jeremy England's work on dissipative adaptation etc.
(as indicated in the article)


(2)
Machine learning algorithms are optimization mechanisms that organize into more and more energy-efficient systems as scientists extend their bases.


(3)
Humans are energy-efficient learning systems.


(4)
Modern machine learning concerns making more energy-efficient systems that approach (3).




Edit: let me 'connect more dots' for you:


(5) Sophisticated super-artificial intelligence could optimize other tasks in nature...

But, to quote the estimable Kumar, is that A&F?
 
Are you certain of that?

Besides, uncertainty may be a prudent approach to life's great questions, but the uncertainty principle in quantum mechanics only applies to quantum mechanics. There is no such thing as a general "uncertainty principle".

(1)
Silly query, for there is no certainty, as far as science goes.

(2)
Keep in mind that we don't end at the macroscale. (i.e. Philosophy does not exist without the microscale)
 
Heisenberg gets pulled over for speeding. "Sir, do you know how fast you were going?" the cop demands. "No," Heisenberg replies, "but I know exactly where I am."
 
I still await your expression.

What errors do you find in the figures you highlighted and criticized in reply #8?

I shall continue to ask, until you respond, as I am unable to descry better figures than the ones I posted, which you appeared to point out to be wrong.

Oh come on, it's very easy to understand. It wasn't about an error in your figures.

You claim that by 2020 humans can build a human-level AI, which you say runs at 10^16 to 10^18 synaptic operations per second.
As evidence you point out that models that run at 10^14 already exist.

10^14 is one ten-thousandth of 10^18, the top of your human range.

MikeG wasn't questioning your numbers; he was expressing his doubts about your assertion that our AIs will become ten thousand times as complex within the space of three years.
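The scale of that doubt is easy to check with simple arithmetic (the 18-month doubling period is the usual Moore's-Law rule of thumb, an assumption added here, not a figure from the thread):

```python
import math

human_low, human_high = 1e16, 1e18   # claimed human synaptic ops/sec range
current = 1e14                       # claimed best existing brain-based model

# Factor still needed to reach each end of the human range.
gap_low = human_low / current        # 100x
gap_high = human_high / current      # 10,000x

# Doublings required, and years at one doubling per 18 months.
doublings = math.log2(gap_high)      # ~13.3 doublings
years = doublings * 1.5              # ~20 years

print(f"gap: {gap_low:.0f}x to {gap_high:.0f}x")
print(f"{doublings:.1f} doublings ≈ {years:.0f} years at Moore's-Law pace")
```

At that pace, closing a 10^4 gap takes roughly two decades, not three years, which is the substance of the objection.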
 
