Merged Artificial Intelligence Research: Supermathematics and Physics

ProgrammingGodJordan: A lie about my machine learning knowledge

Alright, you have demonstrated that you lack basic machine learning knowledge.
4 September 2017 ProgrammingGodJordan: A lie about my machine learning knowledge, months of math ignorance followed by parroting idiocy from his PDF.
He explicitly writes that DeepMind has an "Atari Q architecture" again!
What I wrote back on 29 March 2017 was:
Repeating ignorance about supermanifolds does not change that they are not locally Euclidean, as everyone who reads the Wikipedia article you cited understands.

The phrase "persist in the euclidean regime" sounds like you have no idea about what the "super" part of "supermanifolds" comes from. This is the application of concepts from supersymmetry to manifolds which turns manifolds into explicitly non-Euclidean "regimes" both globally and locally.

Adding insults does not help - I know what neighborhood means in math. The set of points in the neighborhood of any point in a manifold can be Euclidean. The set of points in the neighborhood of any point in a supermanifold is never Euclidean.
That is what the definition of a supermanifold says. A set of coordinates obeying a Grassmann algebra, where elements anticommute, is added to a set of Euclidean coordinates, where elements commute. That gives a non-Euclidean supermanifold M.
That is what the Wikipedia article states by putting quotes around "flat" and "Euclidean" in the informal description of supermanifolds.
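For concreteness, a minimal sketch of the local model (standard mathematics, not taken from either paper): a supermanifold of dimension (p|q) is locally modelled on $\mathbb{R}^{p|q}$, with p commuting coordinates and q anticommuting Grassmann coordinates:

\[
\mathbb{R}^{p|q}:\quad (x^1,\dots,x^p,\ \theta^1,\dots,\theta^q),\qquad
x^i x^j = x^j x^i,\qquad
\theta^a \theta^b = -\theta^b \theta^a \ \Rightarrow\ (\theta^a)^2 = 0.
\]

The anticommuting θ-coordinates are nilpotent and take no real-number values, which is exactly why "flat" and "Euclidean" appear in scare quotes there.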
 
When you get back, ProgrammingGodJordan, think about explaining your "thought curvature" clearly in this thread since your PDF contains gibberish, incoherence, irrelevancy, etc.
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence.
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence.
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature partial paradox reduction gibberish and missing citations.
For example, acknowledge that DeepMind does not have an "atari q architecture".
A proper and clear description would be that DeepMind is a company. The company implemented neural networks trained with a Q-learning architecture. Google supposedly bought the company after it demonstrated a machine that played seven Atari games. The most famous machine to date from DeepMind played Go (and does not have a "go q architecture"!) at a professional 9-dan level, beating the world No. 1 ranked player in 2017.
AlphaGo
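For readers unfamiliar with the term, below is a minimal tabular Q-learning sketch in Python. It is illustrative only: DeepMind's actual Atari agent (the DQN) replaced the table with a deep convolutional network and added experience replay.

[code]
# Minimal tabular Q-learning sketch (illustrative; DeepMind's DQN used a deep
# network as the Q-function approximator, not a table).
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.99):
    """One step of the standard update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Usage: Q = defaultdict(float), then call q_learning_update on each
# (state, action, reward, next_state) transition observed from the environment.
[/code]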
 
Intriguingly, both the Google DeepMind paper "Early Visual Concept Learning" (September 2016) and my paper, entitled "Thought Curvature" (May 2016):

(1) Consider combining what machine learning calls translation-invariant and translation-variant paradigms (i.e. disentangling factors of variation; see the sketch after this list)

(2) Do (1) particularly in the regime of reinforcement learning, causal laws of physics, and manifolds.
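As a rough illustration of the disentangling in (1): the pressure toward disentanglement in the DeepMind paper is commonly written as a β-weighted variational autoencoder objective (the exact formulation is in their paper; this is the standard β-VAE form):

\[
\mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(q_\phi(z\mid x)\,\|\,p(z)\big),\qquad \beta > 1,
\]

where setting β > 1 pressures the latent code z to factor into independent (disentangled) components.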


FOOTNOTE:
Notably, beyond the DeepMind paper, Thought Curvature describes the (machine learning related) algebra of supermanifolds, instead of mere manifolds.


QUESTION:
Given particular streams of evidence..., is some degree of super-manifold structure a viable path toward mankind's likely last invention, Artificial General Intelligence?

The Google Deepmind paper on disentangled visual concept learning was very interesting, thanks for posting it. Developing general AI is of great interest to a lot of people since the applications are so vast. However, given all of the problems they noted from previous disentangled generative learning, I am skeptical as to whether this will become a major tool in object categorization, much less a tool used for general AI.

I am also surprised that you think that a generalized AI would be mankind's last invention. I think it is more likely that many different models of general AI will be developed since variations in their base structure of representative world models and predictive algorithms will provide vastly different results.



An interesting summary of the paper included the following.
Unsupervised disentangled factor learning from raw image data is a major open challenge in AI. Most previous attempts require a priori knowledge of the number and/or nature of the data generative factors...
Our main contributions are the following: 1) we show the importance of neuroscience-inspired constraints (data continuity, redundancy reduction and statistical independence) for learning disentangled representations of continuous visual generative factors; 2) we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models; and 3) we demonstrate how learning disentangled representations enables zero-shot inference and the emergence of basic visual concepts, such as “objectness”.

I would be very interested to learn more about your ideas on the subject, but I could not understand your paper. Could you explain what you mean by causal neural manifold, causal neural atom terms, and causal perturbation curvature?

Very interesting stuff. I think some of your conclusions might be a little overzealous, but I think it's a great topic to be engaged in.
 
I would be very interested to learn more about your ideas on the subject, but I could not understand your paper. Could you explain what you mean by causal neural manifold, causal neural atom terms, and causal perturbation curvature?

Those terms were deleted; although I defined them, they were causing too much misunderstanding for readers.
Updated document: https://www.researchgate.net/publication/316586028_Thought_Curvature_An_underivative_hypothesis
 
Supermathematics and Artificial General Intelligence

[animated GIF: swirling graphic]


This thread (also available in PDF form at the end of a GitHub equivalent source here) is about attempts to build artificial general intelligence.

Artificial general intelligence is often described as likely being mankind's last invention.

Edited by Agatha: 
Trimmed for rule 4. The text is available in the PDF linked.



VIII - Extra List of resources

I compiled a list of resources (beyond things cited throughout the papers) that may be helpful here.


IX - Questions

Does anybody here have good knowledge of supermathematics or a related field, to give any input on the above?

If so, is it feasible to pursue the model I present in the Thought Curvature paper?

And if so, apart from the ones discussed in the paper, what type of training samples do you think would warrant reasonable experiments in the regime of the model I presented?
 
Apart from that stupidly annoying swirly thing, how does this differ from your many other threads?

C'mon let's be fair, I like the swirly thing.

But anyway, how does this differ from the many other threads?
 
Freaky — Why Yoshua Bengio’s new “Consciousness Prior” paper is strange to me

[image: https://i.imgur.com/2MAv0lS.png]

Yoshua’s new paper essentially discusses non-trivial priors in a learning model, e.g. that babies’ brains are pre-equipped with “physics priors”: the ability to intuitively know laws of physics, pertinently while learning by reinforcement.

I won’t go into mathematical detail regarding Bengio’s paper here, but you may better understand it by viewing a paper of mine written last year. (A clear overview is found here.)

Although the aforesaid overview entails the complex topic of supermanifolds, and thus goes somewhat beyond the manifolds that Bengio’s paper entails, the overview is quite clear, and may help readers here better digest Bengio’s paper.

On a separately fun note, I shall underline, after the meme below, why Bengio’s paper is quite freaky to me.

[meme image]


When I read the abstract, I quickly thought that Bengio was talking about learning some laws of physics, in conjunction with RL (reinforcement learning).

Sure enough, reading the entire paper, he does mention babies, intuitive physics, and RL.

Last year I wrote a paper called “Thought Curvature”, about utilizing Supermathematics to learn laws of physics in tandem with RL (reinforcement learning) in the Supermanifold regime. (See a clear overview here)

Some differences are:
(a) Unlike Bengio’s paper, mine presented a somewhat thorough way to run an experiment testing the viability of the novel structure it introduced.

(b) My papers lack the entire description of “mapping and verifying conscious actions in culture” via language, because my papers don’t intend to describe a framework for consciousness.

Footnote:
This is perhaps promising news, though. It’s freaky to me, as I didn’t think my papers were arXiv-worthy yet, but Yoshua’s presentation here is perhaps changing my mind.
 
What does that have to do with babies and stupid, annoying swirly things?

Unfortunately, I currently cannot break down the topic any more than I have done in the OP.

What do you not get about babies and their relation in developing artificial general intelligence?
 
Robot babies?

I don't know, why don't you explain.

(1) The aim is to build artificial general intelligence.

(2) Machine learning often concerns constraining algorithms with respect to biological examples.

(3) Babies are great examples of some non-trivial basis for artificial general intelligence; babies are great examples of biological bases we can use to inspire smart algorithms, especially in the aims of (1), regarding (2).

Does the above help?

Edit: Thanks for your question. I have edited all the documents (including the OP) to include the goal above (in relation to babies), as I hadn't noticed that the content of the new 'What is the goal?' section wasn't obvious enough.
 
Babies are great examples of some non-trivial basis for artificial general intelligence...


I've seen this idea before, in a John Sladek novel.
 
I can't make sense of your statement above.
What do you mean?

Picture, if you will, a view of the ceiling above you. Glance up at it, should that be of visual aid. Now imagine the bottom of a bottle partially obstructing your view of said ceiling. For at least one poster on the forum, your postings herald the imminent onset of this view, as surely as 'betwixt' follows...well, nothing at all.
 

I've seen this idea before, in a John Sladek novel.

The concept is also common in machine learning:

(1) Building Machines That Learn and Think Like People

(2) Early Visual Concept Learning with Unsupervised Deep Learning

(3) The Consciousness Prior

....
 
Picture, if you will, a view of the ceiling above you...

Now you've lost me completely.
What is it that you are trying to say?
 
You are obviously a very silly human then, according to the godlike one.

I have it on good authority that I am a minimally capable God, because I make stuff. Inorite? Cool, or what? I'm working on Commandments and **** now.
 
I aim to improve.
What is incoherent about the sentences in question?

Well no one seems to have a clue what you're saying, for one.

"Babies are great examples of some non trivial basis for artificial general intelligence; babies are great examples of biological baseis we can use to inspire smart algorithms, especially in the aims of (1), regarding (2)."

What does that even mean?
 
What does that even mean?

Recall my earlier quote:

(1) The aim is to build artificial general intelligence.

(2) Machine learning often concerns constraining algorithms with respect to biological examples.

(3) Babies are great examples of some non-trivial basis for artificial general intelligence; babies are great examples of biological bases we can use to inspire smart algorithms, especially in the aims of (1), regarding (2).

Does the above help?

Following from (1) and (2), quite literally, babies are good biological examples which can perhaps inspire the construction of artificial general intelligence.

In particular, babies are intelligent agents, and we can use their behaviours as a non-trivial basis (a complex working example) to inspire the construction of smarter and smarter algorithms that start out somewhat blank (as babies essentially do) and learn over time.
 
