Merged Artificial Intelligence Research: Supermathematics and Physics



"In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it."
 
My take on it is that he's saying that you can create estimates of probability using Bayesian statistics for separate abstract elements. Then you can combine the estimates to form a stronger prediction about the environment for a given intelligent agent. In Bayesian statistics a diffuse prior only provides vague predictions and therefore isn't very useful. His suggestion is that you can combine diffuse priors to make a much stronger prediction.
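
For what it's worth, here is a minimal numeric sketch of that idea, assuming the separate estimates are independent Gaussians (the numbers and code are my own illustration, not anything from his paper):

```python
import numpy as np

# Combining several diffuse Gaussian estimates of the same quantity.
# For independent Gaussian estimates, precisions (1/variance) add, so
# several vague priors can still combine into one sharp prediction.
means = np.array([2.1, 1.7, 2.4, 1.9])       # four weak, separate estimates
variances = np.array([4.0, 5.0, 6.0, 4.5])   # all quite diffuse

precisions = 1.0 / variances
combined_var = 1.0 / precisions.sum()
combined_mean = combined_var * (precisions * means).sum()

print(f"combined: {combined_mean:.2f} +/- {np.sqrt(combined_var):.2f}")
# Each estimate alone has sigma >= 2.0; the combination is ~1.1.
```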

He seems to be trying to solve the problem of an intelligent agent acting within an environment without sufficient information about the environment. This is an ongoing problem with AI and machine learning. He seems to see this as a pattern matching problem which is why he refers to recurrent neural networks.

None of that is so bad, but after that it pretty much falls apart. He suggests that this mechanism would be useful for awareness and consciousness. He shows a misunderstanding of focus and seems to subscribe to the language-of-thought theory. For example, if you did try to use his structure for awareness you would run into the frame problem. Maybe he isn't aware of it. His notion about focus is ludicrous, since it could give you a random, divergent, or convergent process. This is a common problem with bottom-up approaches. To date there has been no supporting evidence for a language of thought, other than that it fits with the computational theory of mind.

That's my opinion about it. In other words, he might well be able to make a contribution to Bayesian statistics but I don't see this as advancing AI in the least.

Your opinion is off:

(1) It doesn't appear he is providing a framework that fully describes the structure of awareness; notice section 3, the considerations section, where one quickly finds a suggestion (and no actual, detailed instruction) on how to build what he calls "a toy example" to illustrate the theory he presents.

(2) Based on (1), the remainder of your response, describing how his paper "falls apart", is off.

The phrase "toy example", especially in the context above, should show that the paper does not frame (nor intend to frame) any complete solution for awareness.
 

I don't know how your post above relates to the OP, but here are some useful links:

Deep Learning AI Better Than Your Doctor at Finding Cancer:
https://singularityhub.com/2015/11/...ai-better-than-your-doctor-at-finding-cancer/


Self-taught artificial intelligence beats doctors at predicting heart attacks:
http://www.sciencemag.org/news/2017...igence-beats-doctors-predicting-heart-attacks


Here is a sequence of cognitive fields/tasks where sophisticated artificial neural models exceed human performance:

1) Language translation (eg: Skype, 50+ languages)
2) Legal conflict resolution (eg: 'Watson')
3) Self-driving (eg: 'Otto')
4) Disease diagnosis (eg: 'Watson')
5) Medicinal drug prescription (eg: 'Watson')
6) Visual product sorting (eg: 'Amazon Corrigon')
7) Help desk assistance (eg: 'Digital Genius')
8) Mechanical cucumber sorting (eg: 'Makoto's Cucumber Sorter')
9) Financial analysis (eg: 'SigFig')
10) E-discovery law (eg: 'Social Science Research Network')
11) Anesthesiology (eg: 'SedaSys')
12) Music composition (eg: 'Emily')
13) Go (eg: 'AlphaGo')
n) etc, etc


The Rise of the Machines – Why Automation is Different this Time:
https://www.youtube.com/watch?v=WSKi8HfcxEk

Will artificial intelligence take your job?:
https://www.youtube.com/watch?v=P_-wn8ghcoY

Humans need not apply:
https://www.youtube.com/watch?v=7Pq-S557XQU

The wonderful and terrifying implications of computers that can learn:
https://www.youtube.com/watch?v=t4kyRyKyOpo

And also, a cool xkcd:

 

That comic is an excellent example of how it's not possible for us to be living in a simulation. The fact that the protagonist literally needs an infinite universe isn't just a problem faced by a computer made of rocks and sand.

Either we don't live in a simulation, or computing works differently outside the Matrix

But let's put those quibbles aside and dig into some physics, shall we? Theoretical physicists from Oxford just published Quantized gravitational responses, the sign problem, and quantum complexity in Science Advances, in which they document the computational complexity of simulating the quantum behaviour of the particles that make up the universe. It turns out that the cost of tracking these particles grows exponentially with their number: the amount of computing power needed roughly doubles with each additional particle, which means that "storing information about a couple of hundred electrons would require a computer memory that would physically require more atoms than exist in the universe."
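
For a rough sense of that scaling, here is a back-of-the-envelope sketch (my own illustration, not code from the paper): a full classical description of n two-state particles needs 2^n complex amplitudes, so the memory requirement doubles with each particle added.

```python
# Memory needed to store the full quantum state of n two-state particles,
# at one complex double (16 bytes) per amplitude. There are 2**n
# amplitudes, so the requirement doubles with each added particle.
BYTES_PER_AMPLITUDE = 16

def state_memory_bytes(n_particles: int) -> int:
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

for n in (10, 50, 100, 200):
    print(f"{n:>3} particles -> {state_memory_bytes(n):.3e} bytes")
# 200 particles already demands ~2.6e61 bytes of memory.
```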

Then there's the data transfer problem.

 
ProgrammingGodJordan: Looks like an expanded incoherent document

Have you updated your document to remove the gibberish that is "Thought Curvature"?
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence.
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence.
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
4 October 2017 ProgrammingGodJordan: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
4 October 2017 ProgrammingGodJordan: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
Some Markov receptive C^∞_π(R^n_π) reasonably permits uniform symbols on the boundary of R^n, betwixt some U_α, of φ_i; particularly on some input space of form η. (See preliminary encoding.)
The link is to an even worse "Supermanifold Hypothesis (via Deep Learning)" PDF with a nonsensical abstract of:
If any homeomorphic transition in some neighbourhood in a euclidean space R^n yields φ(x, θ)^T w for w_i, θ ∈ R^n, then reasonably, some homeomorphic transition sequence in some euclidean superspace C^∞(R^n) yields φ(x, θ, θ̄)^T w for w_i, θ ∈ R^n; θ̄ ∈ some resultant map sequence over θ via φ, pertinently, abound some parametric oscillation paradigm, containing Z_λ.[12]

Pertinently, R^n → R^{0|n} applies, on the horizon of the bosonic Riccati.[12]
Other than advertising your word & math salad PDFs, you seem to be
  • Going on about the trivial fact that babies learn and that their learning processes may be a model for AI learning.
  • Harboring a fantasy that the other posters are ignorant about programming and AI, with the posting of irrelevant tutorials.
 
ProgrammingGodJordan : "Supermathematics ...": the first word in the title is a lie

Next is the PDF "Supermathematics and Artificial General Intelligence" which does have a coherent abstract:
I clearly unravel how I came to invent the supermanifold hypothesis in deep learning, (a component in another description called 'thought curvature') in relation to quantum computation.
However:

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": the first word in the title is a lie because supermathematics is not AI.
Supermathematics is the branch of mathematical physics which applies the mathematics of Lie superalgebras to the behaviour of bosons and fermions.
The behaviour of bosons and fermions is not machine learning.
 
ProgrammingGodJordan: "Supermathematics ...": Wrong "manifold learning frameworks

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.
There are no "manifold learning frameworks" in
Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE) which this paper combines.
 
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.

There are no "manifold learning frameworks" in

Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE) which this paper combines.



That’s the problem with folks who get too used to bamboozling people with sciency-sounding gibberish. Eventually they come someplace like this and get their ass handed to them by people who see through them.
 
halleyscomet said:
That’s the problem with folks who get too used to bamboozling people with sciency-sounding gibberish. Eventually they come someplace like this and get their ass handed to them by people who see through them.

@Halleyscomet, RealityCheck had already been shown to lack basic machine learning know-how.

For example, RealityCheck used words indicating that he or she was not aware of the basic fact that deep learning models can include or exclude pooling, something the typical undergraduate machine learning student would discover.

See the scenario here.

Here is a quick spoiler, saved just for this occasion:



ProgrammingGodJordan said:


PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite scientific evidence to the contrary?

(2) Why ignore that you hadn't known that deep learning models could include or exclude pooling layers?

(3) Given your blunder in (2) above, why ignore that the Atari Q model did not include pooling, for pretty clear reinforcement learning reasons (as I had long expressed in my thought curvature paper)?

(4) Why continuously accuse me of supposedly expressing that 'all supermanifolds were locally euclidean', contrary to the evidence? Why do my words "Supermanifold may encode as 'essentially flat euclidean super space' fabric" translate strictly to "Supermanifolds are euclidean" for you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong, (relating to question 2 and 3):



Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive mathematical description of mine regarding DeepMind's flavour of deep Q-learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




I have found one Google DeepMind paper about the neural network architecture that explicitly mentions pooling layers, but not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any reference for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below.)




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of position in an image. This is why DeepMind left them out; the model needs to remain sensitive to changes in entities' positions per frame, so that it can reinforce itself by Q-updating.
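
For concreteness, here is a minimal PyTorch sketch of that no-pooling design; the layer sizes follow the published DQN convolutional stack (Mnih et al., 2015), while the class name and comments are my own illustration:

```python
import torch
import torch.nn as nn

# DQN-style convolutional stack: strided convolutions only, no pooling,
# so the positions of entities in each frame survive to the Q-value head.
class AtariQNetwork(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),  # 4 stacked 84x84 frames in
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),  # 7x7 feature map for 84x84 inputs
            nn.ReLU(),
            nn.Linear(512, n_actions),   # one Q-value per action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
```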


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why Atari Q left out pooling layers. (A clear explanation similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as long written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that DeepMind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutional models can either include or exclude pooling. (Deep learning basics.)

Why Reality Check was wrong (relating to question 4):



Nowhere had I stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nothing above expresses that all supermanifolds are locally euclidean. Why bother to lie?



 
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.
There are no "manifold learning frameworks" in
Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE) which this paper combines.

What are you on about above?

Are you disagreeing with my prior statement that disentangling factors aligns with manifold learning?
 
Next is the PDF "Supermathematics and Artificial General Intelligence" which does have a coherent abstract:

However:

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": the first word in the title is a lie because supermathematics is not AI.

The behaviour of bosons and fermions is not machine learning.

For reality's sake, please look at the thought curvature paper for more than 5 minutes.

You will notice a source in that paper concerning supersymmetry at brain scale.

That has something to do with something called the bosonic Riccati.

I explain the details in a github document here (See item 2).
 
Why should this thread deal with an image of what looks like mathematical gibberish?



Thought Curvature doesn't appear to be "mathematical gibberish" to apparently smart people from other places on the web.

Examples:

(1) Discussion on science forum:
http://www.scienceforums.net/topic/109496-supermathematics-and-artificial-general-intelligence/

The conversations in the science forum above led to another conversation with a user who had participated in the aforesaid discussion.



(2) Discussion on physics overflow:
https://www.physicsoverflow.org/39603/possible-create-transverse-ising-compatible-hamiltonian


etc

What is it that you don't understand?

Why do you reckon that your words (which demonstrate a lack of understanding) necessitate that thought curvature is suddenly, supposedly, "gibberish"?
 
ProgrammingGodJordan: Quote the cited description of manifold learning frameworks

What are you on about above?
An inability to understand what you read or maybe even write!
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.

But just in case:
4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.
 
ProgrammingGodJordan: Links to people basically ignoring him

Thought Curvature doesn't appear to be "mathematical gibberish" to apparently smart people from other places on the web.
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!
A couple of threads on other forums with a handful of posts or comments.

Mordred for example suggests that you need to study to make any progress.

In the other forum you admit that you do not have a college level of education or training in physics (and thus the required math skills).
Unfortunately, my knowledge is very limited, as I lack at minimum a Bachelors physics degree, or any training in physics, so the method outlined in the super Hamiltonian paper above, was the easiest entry point I could garner of based on evidence observed thus far.
 
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!
A couple of threads on other forums with a handful of posts or comments.

Mordred for example suggests that you need to study to make any progress.

Of what relevance is this to the OP?

As I mentioned in reply 194, Mordred also went on, in my personal inbox, to answer some questions that helped lead to thought curvature's current form.

RealityCheck said:
In the other forum you admit that you do not have a college level of education or training in physics (and thus the required math skills)

Yes, I did. (Recall that it was I who linked you to said forum.)
However, this does not suddenly establish that thought curvature is "mathematical gibberish", as you would like to incite.
See the same forum once more.
 
Of what relevance is this to the OP?
Your post is not the OP nor is reply 191.
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!

But you did list Mordred's messages to you so:
4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
Mordred describes a tiny bit of the mathematics and physics of QFT. Mordred ignores your work. Mordred mentions one of your citations favorably. He does not mention that this is a year and a half old preprint with no sign of publication. But it is clear that quantum computing should give advantages over classical computing in AI.
 
Have you updated your document to remove the gibberish that is "Thought Curvature"?


Other than advertising your word & math salad PDFs, you seem to be
  • Going on about the trivial fact that babies learn and that their learning processes may be a model for AI learning.
  • Harboring a fantasy that the other posters are ignorant about programming and AI, with the posting of irrelevant tutorials.

That babies learn (in relation to machine learning) is not a "trivial fact"; it underpins crucial studies with respect to the hard problem of constructing artificial general intelligence, unbeknownst to you.
When will you learn to produce sensible feedback?
 
An inability to understand what you read or maybe even write!
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.

But just in case:
4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.


Smh.
How do you manage to contradict yourself so often?

"Your initial words: 'manifold learning frameworks' link is wrong.
There are no 'manifold learning frameworks' in
Disentangling factors of variation in deep representations using adversarial training."

Then right after that, in the same response you mention:

Your following words: "There are 3 instances of the word manifold referring to the data."

These are telling signs that you lack machine learning know-how.
What is it you think the learning algorithm is doing with that manifold-aligned data?
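
For context, here is a minimal scikit-learn sketch of what a manifold learning algorithm does with manifold-aligned data (my own illustration; this code is not from the disputed paper):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Manifold learning: recover a low-dimensional structure (a 2-D sheet)
# from data embedded in a higher-dimensional space (3-D points here).
X, _ = make_swiss_roll(n_samples=1000, random_state=0)
embedding = Isomap(n_components=2).fit_transform(X)  # "unroll" the sheet
print(X.shape, "->", embedding.shape)                # (1000, 3) -> (1000, 2)
```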
 
Your post is not the OP nor is reply 191.
4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him!

But you did list Mordred's messages to you so:
4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
Mordred describes a tiny bit of the mathematics and physics of QFT. Mordred ignores your work. Mordred mentions one of your citations favorably. He does not mention that this is a year and a half old preprint with no sign of publication. But it is clear that quantum computing should give advantages over classical computing in AI.

You do recognize that QFT relates non-trivially to my work?

So how could Mordred be ignoring my work while at the same time discussing QFT (which is quite pertinent to my work)?

You do see the irony in that don't you?
 
ProgrammingGodJordan: An "I stated manifold learning frameworks is in the paper" lie

4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.

4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.

And now:
4 October 2017 ProgrammingGodJordan: It is a lie that I stated that manifold learning frameworks is in the paper.
This is what I wrote:
4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong.
There are no "manifold learning frameworks" in
Disentangling factors of variation in deep representations using adversarial training. There are 3 instances of the word manifold referring to the data. The frameworks are Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE) which this paper combines.
 

I can't do much more than the response here.

You will have to sort out your errors yourself.

RealityCheck, based on your prior blunders, the following may prove helpful for you:

ProgrammingGodJordan said:
Please consider:

(1) https://www.youtube.com/watch?v=HBxCHonP6Ro (clear programming tutorials)

(2) https://www.coursera.org/learn/machine-learning (good machine learning tutorial)

(3) https://www.youtube.com/watch?v=79pmNdyxEGo (very good youtube deep q learning tutorial)



Footnote:

If anyone else has any sensible feedback, please observe this conversation here, regarding thought curvature, as a helpful premise.
 
I can't do much more than the response here.
So you cannot answer a simple question?
4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.
Then we are left with
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence.
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence.
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
  10. 4 October 2017 ProgrammingGodJordan: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
  11. 4 October 2017 ProgrammingGodJordan: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
  12. 4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.
  13. 4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him in 2 forum threads!
  14. 4 October 2017 ProgrammingGodJordan: It is a lie that I stated that manifold learning frameworks is in the paper.
and:
4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
 
So you cannot answer a simple question?
4 October 2017 ProgrammingGodJordan: Quote the description of manifold learning frameworks in the paper you cited.
Then we are left with
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence.
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence.
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
  10. 4 October 2017 ProgrammingGodJordan: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
  11. 4 October 2017 ProgrammingGodJordan: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
  12. 4 October 2017 ProgrammingGodJordan: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks.
  13. 4 October 2017 ProgrammingGodJordan: Rather ignorantly links to people basically ignoring him in 2 forum threads!
  14. 4 October 2017 ProgrammingGodJordan: It is a lie that I stated that manifold learning frameworks is in the paper.
and:
4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!



Your words, ironically, constantly display ignorance.

QFT is non-trivially related to my work. (See this extra list here, in relation to "thought curvature", compiled by myself, constituting some QFT material.)

So how exactly does Mordred supposedly mostly ignore my work, by discussing QFT?
 
...QFT is non-trivially related to my work...

Comprehension fail. Non trivial comprehension fail betwixt gibberish. Take a few days to read the following:

The claim wasn't that Mordred ignored topics to do with your work; it was that the person you quoted was mostly ignoring your work itself. I assume this is because it doesn't agree with whatever theory you're blathering on about now.
 
Comprehension fail. Non trivial comprehension fail betwixt gibberish. Take a few days to read the following:

The claim wasn't that Mordred ignored topics to do with your work; it was that the person you quoted was mostly ignoring your work itself. I assume this is because it doesn't agree with whatever theory you're blathering on about now.



I am unable to parse your response above.

Would you care to try again, in a cohesive manner?
 
No, it reads perfectly well, as you well know.

By "cohesive", did you actually mean something like "comprehensible"? Because cohesive is just the wrong word.
 
No, it reads perfectly well, as you well know.

By "cohesive", did you actually mean something like "comprehensible"? Because cohesive is just the wrong word.

Edited by Agatha: 
Edited for breach of rule 0 and rule 10.
 
@RealityCheck

Also, why not pursue Artificial Intelligence if possible?

(1) Suzanne Gildert left the D-Wave quantum computing company to start her own artificial intelligence lab: https://youtu.be/JBWc09b6LnM?t=1303




(2) As another example, Max Tegmark expressed that physicists had long neglected to define the observer in much of their equations. (The observer being the intelligent agent: https://youtu.be/jXBfXNW6Bxo?t=1977 )

Now Tegmark is doing AI work: https://arxiv.org/abs/1608.08225

 
@Halleyscomet, RealityCheck had already been shown to lack basic machine learning know-how.

I can see why your ideas have no traction in the larger scientific communities. You respond to criticism of your claims with accusations of the other person lacking understanding. You respond not with a coherent rebuttal, but insults. With such an attitude the quality of your ideas is meaningless, as you are actively discouraging people from considering them. It's akin to a chef cooking a steak, then covering it with spittle. Nobody is going to eat it and, as a result, nobody will be able to judge the quality of the steak or its preparation.

You are your own worst enemy and have gone to great lengths to sabotage yourself. It's sad really. If you have any good ideas they'll be ignored until a more competent communicator reinvents them, or plagiarizes them from you.



I wonder if there are any mathematicians or AI researchers in this thread who are not above a bit of academic plagiarism. I certainly hope there are not.
 
I can see why your ideas have no traction in the larger scientific communities. You respond to criticism of your claims with accusations of the other person lacking understanding. You respond not with a coherent rebuttal, but insults. With such an attitude the quality of your ideas is meaningless, as you are actively discouraging people from considering them. It's akin to a chef cooking a steak, then covering it with spittle. Nobody is going to eat it and, as a result, nobody will be able to judge the quality of the steak or its preparation.

You are your own worst enemy and have gone to great lengths to sabotage yourself. It's sad really. If you have any good ideas they'll be ignored until a more competent communicator reinvents them, or plagiarizes them from you.



I wonder if there are any mathematicians or AI researchers in this thread who are not above a bit of academic plagiarism. I certainly hope there are not.

(1) I had not accused him or her of any such thing; his or her blunder is clearly observable, whether or not I exist to point out such a blunder. (See here.)

(2) I know of recent single-author papers that are gaining traction even without detailed results, let alone substantive ways to implement the things they propose (an example is this paper by Bengio, one of the pioneers of deep learning).

Although thought curvature does more than papers like Bengio's above to provide ways to perform experiments, there is still a lot of work to be done before I can submit the papers to strong journals.
 
