
International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
 


Tags artificial intelligence , consciousness , Edward Witten , Max Tegmark

Old 15th August 2017, 09:32 PM   #81
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF
Quote:
Deepmind’s atari q architecture encompasses non-pooling convolutions, therein generating object shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled t state; translation non-invariance
I have covered the "atari q" nonsense (DeepMind has no "Atari q" architecture; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.
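For reference, the DeepMind network usually meant by "Atari Q-learning" (the deep Q-network of Mnih et al.) does stack convolutional layers with no pooling between them. A minimal sketch, assuming the widely reported 2015 hyperparameters (84×84 preprocessed frames; 8×8 stride-4, 4×4 stride-2, 3×3 stride-1 convolutions), of how the feature-map sizes fall out:

```python
# Spatial size of a valid (no-padding) convolution: floor((n - k) / s) + 1.
def conv_out(n, kernel, stride):
    return (n - kernel) // stride + 1

# DQN conv stack (reported hyperparameters): no pooling layers anywhere.
layers = [(8, 4), (4, 2), (3, 1)]  # (kernel, stride) per conv layer
size = 84                          # 84x84 preprocessed Atari frame
sizes = []
for kernel, stride in layers:
    size = conv_out(size, kernel, stride)
    sizes.append(size)

print(sizes)  # [20, 9, 7]
```

Because no pooling follows the convolutions, each map still carries positional information, which seems to be the "object shift sensitivity" the quoted passage gestures at.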
Reality Check is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 15th August 2017, 09:33 PM   #82
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
16 August 2017 ProgrammingGodJordan: Demonstrates an inability to read - my post was about other Grassmannian nonsense he posted!
15 August 2017 ProgrammingGodJordan: Grassmann number ignorance and nonsense is about nonsense in a 30 March 2017 post.
You must observe by now, that supermanifolds may bear euclidean behaviour. (See euclidean supermanifold reference)

Where the above is valid, grassmann algebra need not apply, as long stated.
Old 15th August 2017, 09:38 PM   #83
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Originally Posted by ProgrammingGodJordan View Post
Typo Correction, ...temporal difference learning paradigm representing distributions over eta.
ProgrammingGodJordan, you linked to an irrelevant Wikipedia article, unless you are doing numerical simulations of fluids.
Direct numerical simulation
Quote:
A direct numerical simulation (DNS)[1] is a simulation in computational fluid dynamics in which the Navier–Stokes equations are numerically solved without any turbulence model.
Old 15th August 2017, 09:38 PM   #84
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF

I have covered the "atari q" nonsense (DeepMind has no "Atari q" architecture; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.
Wrong.

It is no fault of mine, that you are unable to reduce basic English.

Anyway, it was you that expressed nonsense:

Originally Posted by Reality Check View Post
Originally Posted by ProgrammingGodJordan
Deepmind’s atari q architecture encompasses non-pooling convolutions
I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
You falsely believed that pooling layers were crucial to models with convolutional layers, despite the fact that atari Q did not include any such pooling layer.

The evidence is clearly observable:





Originally Posted by Reality Check View Post
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.
You are demonstrably wrong, as you will see below.



Originally Posted by Reality Check View Post
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".

What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-Int...rdan-Bennett-9




Originally Posted by Reality Check View Post
I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
(1)
My thought curvature paper is unavoidably valid, in expressing that deepmind did not use pooling layers in AtariQ model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?



WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, e.g. such that object detection can occur regardless of position in an image. This is why deepmind left it out; the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.
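The invariance claim above can be illustrated with a toy example (pure Python, purely illustrative): max-pooling over windows can make a small shift of a feature invisible to later layers, while the un-pooled signal still distinguishes the two positions.

```python
def max_pool(xs, window):
    """Non-overlapping max pooling over a 1-D signal."""
    return [max(xs[i:i + window]) for i in range(0, len(xs), window)]

a = [0, 9, 0, 0, 0, 0, 0, 0]  # a "feature" at position 1
b = [9, 0, 0, 0, 0, 0, 0, 0]  # the same feature shifted to position 0

print(a == b)                            # False: raw maps are shift-sensitive
print(max_pool(a, 2) == max_pool(b, 2))  # True: pooling hides the shift
```

A network that needs to react to where things are (as a game-playing agent does) therefore has a reason to omit pooling; one that only needs to know what is present has a reason to keep it.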


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why atari q left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as long written in the thought curvature paper)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)

Last edited by ProgrammingGodJordan; 15th August 2017 at 09:39 PM.
Old 15th August 2017, 09:39 PM   #85
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Originally Posted by ProgrammingGodJordan View Post
You must observe by ....
I have observed that you cannot understand what you read, specifically that post:
15 August 2017 ProgrammingGodJordan: Grassmann number ignorance and nonsense.
16 August 2017 ProgrammingGodJordan: Demonstrates an inability to read - my post was about other Grassmannian nonsense he posted!
Old 15th August 2017, 09:45 PM   #86
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
ProgrammingGodJordan, you linked to an irrelevant Wikipedia article, unless you are doing numerical simulations of fluids.
Direct numerical simulation
See thought curvature paper.
That eta is related to this, as presented there.
Old 15th August 2017, 10:16 PM   #87
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718



PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite contrasting scientific evidence?

(2) Why ignore that you hadn't known that deep learning models, could include or exclude pooling layers?

(3) From your blunder in (2) above, why ignore that atari q did not include pooling for pretty clear reinforcement learning reasons (as I had long expressed in my thought curvature paper)?

(4) Why continuously accuse me of supposedly expressing that 'all super-manifolds were locally euclidean', despite the contrasting evidence? Why do my words "Supermanifold may encode as "essentially flat euclidean super space" fabric" translate strictly to "Supermanifolds are euclidean" to you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong, (relating to question 2 and 3):




Originally Posted by Reality Check View Post
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.
You are demonstrably wrong, as you will see below.



Originally Posted by Reality Check View Post
Originally Posted by ProgrammingGodJordan
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".

What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-Int...rdan-Bennett-9




Originally Posted by Reality Check View Post
I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
(1)
My thought curvature paper is unavoidably valid, in expressing that deepmind did not use pooling layers in AtariQ model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?



WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, e.g. such that object detection can occur regardless of position in an image. This is why deepmind left it out; the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.
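For readers unfamiliar with the "Q-updating" mentioned above: the standard tabular Q-learning rule, of which DeepMind's deep Q-network is a function-approximation variant, is Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)). A minimal sketch (the toy states, actions, and reward here are invented for illustration):

```python
from collections import defaultdict

Q = defaultdict(float)     # state-action values, default 0.0
alpha, gamma = 0.5, 0.9    # learning rate and discount factor (illustrative)

def q_update(s, a, r, s_next, actions):
    """One Q-learning step: move Q(s,a) toward r + gamma * best next value."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

actions = ["left", "right"]
q_update("s0", "right", 1.0, "s1", actions)  # a reward of 1 pulls Q(s0,right) up
print(Q[("s0", "right")])  # 0.5
```

In the Atari setting the table is replaced by the convolutional network, but the update target has the same shape.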


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why atari q left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as long written in the thought curvature paper)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)


Why Reality Check was wrong (relating to question 4):


Originally Posted by Reality Check View Post
Nowhere had I supposedly stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nowhere above expresses that all supermanifolds were locally euclidean. Why bother to lie?

Last edited by ProgrammingGodJordan; 15th August 2017 at 10:18 PM.
Old 15th August 2017, 10:23 PM   #88
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
You need observe once more, my prior quote:

Originally Posted by ProgrammingGodJordan
You must observe by now, that supermanifolds may bear euclidean behaviour. (See euclidean supermanifold reference)

Where the above is valid, grassmann algebra need not apply, as long stated.
Otherwise, why bother to ignore the evidence?

How shall ignoring the evidence benefit your education?
Old 16th August 2017, 01:44 AM   #89
Roboramma
Philosopher
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 9,756
Originally Posted by ProgrammingGodJordan View Post
Irrelevant. Max Tegmark is also a physicist who has not undergone official artificial intelligence training, and yet he has already contributed important work in the field of machine learning.

Tegmark presents consciousness as a mathematical problem, while Witten presents it as a likely forever unsolvable mystery.
I didn't suggest that being a physicist would prevent him from making contributions to AI. I suggested that it wouldn't guarantee that he would. Showing that other physicists have made such contributions would address the first argument, but not the second.

Similarly, people who wear red hats aren't necessarily going to be able to make breakthroughs in AI. Finding a picture of an AI researcher who has made breakthroughs wearing a red hat wouldn't change that fact.




Quote:
It is unavoidable that he could contribute; manifolds (something Edward works on) apply empirically in machine learning.

One need not be a Nobel prize-winning physicist to observe the above.
I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 16th August 2017, 11:42 AM   #90
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Roboramma View Post


I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
Consider a prior quote of mine, you may have missed:

Originally Posted by ProgrammingGodJordan
It is noteworthy that physicists aim to unravel the cosmos' mysteries, and so it is a mystery as to why Witten would select not to partake amidst the active machine learning field, especially given that:

(1) Manifolds apply non-trivially in machine learning.

(2) AI is one of mankind's most profound tools.

(3) AI is already performing Nobel prize-level tasks, very efficiently.

(4) AI may well be mankind's last invention.
Old 17th August 2017, 07:34 PM   #91
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations)

18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
PDF
Quote:
Separately, uetorch, encodes an object trajectory behaviour physics learner, particularly on pooling layers; translation invariance
A mishmash of words not meaning much.
There is a "uetorch" open source environment using the Torch deep learning environment.

Last edited by Reality Check; 17th August 2017 at 07:35 PM.
Old 17th August 2017, 07:51 PM   #92
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework"

18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
PDF
Quote:
It is non-abstrusely observable, that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al), amalgamating in perceptual learning abstractions into non-childhood.
That sentence is the only "Stahl" on the web page displaying the PDF!
I am getting the impression that English is a second language for the author or they are stringing together science words and thinking it makes sense.
Old 17th August 2017, 07:58 PM   #93
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
PDF
Quote:
As such, it is perhaps exigent that non-invariant fabric composes in the invariant, therein engendering time-space complex optimal causal, conscious artificial construction. If this confluence is reasonable, is such paradoxical?
Everyone can read that this paragraph is gibberish and invalid English.
A total non sequitur (not "As such") into "fabric".
Old 17th August 2017, 08:02 PM   #94
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
PDF
Quote:
Partial paradox reduction
Paradoxical strings have been perturbed to reduce in factor variant/invariant manifold interaction paradigms (Bengio et al, Kihyuk et al), that effectively learn to disentangle varying factors.
Old 17th August 2017, 08:34 PM   #95
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down ProgrammingGodJordan: A lie about what I wrote in a post

A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.
Quote:
Repeating ignorance about supermanifolds does not change that they are not locally Euclidean, as everyone who reads that Wikipedia article you cited understands.
Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: Real numbers R with an anti-commuting generator ε "where εε = -εε whence ε² = 0". For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.
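That one-generator example behaves like the dual numbers: pairs a + εb with ε² = 0. A minimal sketch (the class name and operator support are my own, for illustration only):

```python
class Dual:
    """Numbers a + eps*b over the reals, with eps*eps = 0."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # (a1 + eps*b1)(a2 + eps*b2) = a1*a2 + eps*(a1*b2 + b1*a2);
        # the eps^2 term vanishes by the rule eps*eps = 0.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

eps = Dual(0, 1)
print(eps * eps == Dual(0, 0))  # True: the generator squares to zero
```

With a single generator, anti-commutation (εε = -εε) forces ε² = 0, which is exactly what the multiplication rule above encodes; the non-Euclidean character enters through this nilpotent direction.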

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind
Quote:
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.
I already knew about their use in convolutional neural networks so I went looking for their possible use for DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance to "Supermanifolds are euclidean", because you know that I know supermanifolds are not Euclidean.

Last edited by Reality Check; 17th August 2017 at 08:59 PM.
Old 31st August 2017, 04:30 PM   #96
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Supermathematics and Artificial General Intelligence / Thought Curvature




Intriguingly, both the Google Deepmind paper, "Early Visual Concept Learning" (September 2016) and the paper of mine, entitled "Thought curvature" (May 2016):

(1) Consider combining some things in machine learning called translation-invariant and translation-variant paradigms (i.e. disentangling factors of variation)

(2) Do (1) particularly in the regime of reinforcement learning, causal laws of physics, and manifolds.


FOOTNOTE:
Notably, beyond the Deepmind paper, thought curvature describes the (machine learning related) algebra of Supermanifolds, instead of mere manifolds.


QUESTION:
Given particular streams of evidence..., is a degree of the super-manifold structure a viable path in the direction of mankind's likely last invention, Artificial General Intelligence?


Edited by Agatha:  Edited as the 'thought curvature' link is dead. Please go to this link: https://www.researchgate.net/publica...ive_hypothesis












Last edited by Agatha; 3rd October 2017 at 11:56 AM.
Old 31st August 2017, 05:30 PM   #97
Mojo
Mostly harmless
 
Mojo's Avatar
 
Join Date: Jul 2004
Posts: 29,317
__________________
"You got to use your brain." - McKinley Morganfield

"The poor mystic homeopaths feel like petted house-cats thrown at high flood on the breaking ice." - Leon Trotsky
Old 31st August 2017, 05:55 PM   #98
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,399
I'm confused, is this your area of research? Are you submitting papers?
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 31st August 2017, 06:02 PM   #99
Argumemnon
World Maker
 
Argumemnon's Avatar
 
Join Date: Oct 2005
Location: In the thick of things
Posts: 67,304
Originally Posted by RussDill View Post
I'm confused, is this your area of research?
His area of research is Truth™.
__________________
<Roar!>

Old 31st August 2017, 06:44 PM   #100
John Jones
Penultimate Amazing
 
John Jones's Avatar
 
Join Date: Apr 2009
Location: Iowa USA
Posts: 11,361
Originally Posted by ProgrammingGodJordan View Post
[imgw=350][...]

Do you ever get tired of hearing yourself talk?
__________________
Credibility is not a boomerang. If you throw it away, it's not coming back.
Old 31st August 2017, 07:18 PM   #101
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Thumbs down Thought curvature gibberish, incoherence, irrelevancy, etc.

Originally Posted by ProgrammingGodJordan View Post
Intriguingly, ...
Advertising a PDF full of gibberish already addressed in an existing thread, and associated with another incoherent "paper" addressed in yet another thread!
A list summarizing this has the thought curvature items, and there are other posts you have ignored.
Thought curvature gibberish, incoherence, irrelevancy, etc.
  1. 8 August 2017 ProgrammingGodJordan: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017 ProgrammingGodJordan: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017 ProgrammingGodJordan: A Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
  6. 18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
  7. 18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.

Last edited by Reality Check; 31st August 2017 at 07:20 PM.
Old 31st August 2017, 07:26 PM   #102
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,305
Originally Posted by ProgrammingGodJordan View Post
See thought curvature paper.
I replied to your post where you defined eta with a link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid:
Quote:
where ν is the kinematic viscosity and ε is the rate of kinetic energy dissipation
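For completeness, the Kolmogorov length scale that quoted definition belongs to (the formula itself did not survive the quote) is:

```latex
\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}
```

with ν the kinematic viscosity and ε the dissipation rate, matching the quoted definitions; it is a parameter of turbulent flow, not of machine learning.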
But since you brought it up. The original post of gibberish:
Quote:
"Simply", it consists of manifolds as models for concept representation, in conjunction with policy π - a temporal difference learning paradigm representing distributions over eta.
has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!

Last edited by Reality Check; 31st August 2017 at 07:36 PM.
Old 31st August 2017, 08:53 PM   #103
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
I'm confused, is this your area of research? Are you submitting papers?
My area of research is computer science, particularly in Artificial Intelligence.

I am not trained in machine learning, university-wise, but I do research anyway.





Signature:

Last edited by ProgrammingGodJordan; 31st August 2017 at 08:55 PM.
Old 31st August 2017, 08:56 PM   #104
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
I replied to your post where you defined eta with an link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid:


But since you brought it up. The original post of gibberish:

has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!
Eta (η) simply refers to the input space which the thought curvature structure may absorb/evaluate.





Signature:
Old 31st August 2017, 09:12 PM   #105
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017

Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: Real numbers R with an anti-commuting generator ε "where εε = -εε whence ε² = 0". For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind

I already knew about their use in convolutional neural networks so I went looking for their possible use for DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie I translate that ignorance to "Supermanifolds are euclidean" because you know that I know supermanifolds are not Euclidean.

Alright, you have demonstrated that you lack basic machine learning knowledge.


PART A
You had unavoidably mentioned that "the set of points in the neighborhood of any point in a supermanifold is never Euclidean."


PART B
My prior expression, "Deepmind's Atari Q architecture", nowhere claimed that Deepmind (a machine learning company) was an "atari machine".

Here are other typical presentations of Deepmind's Atari Q architecture:

(1) https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner

(2) http://ikuz.eu/2015/02/27/google-dee...r-source-code/


PART C
You had long demonstrated that you lacked basic knowledge in machine learning.

WHY?
You had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.

RECALL:




Originally Posted by Reality Check View Post
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.
You are demonstrably wrong, as you will see below.



Originally Posted by Reality Check View Post
Originally Posted by ProgrammingGodJordan
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".

What is the relevance of your line above?

Here is a more detailed, intuitive mathematical description of mine regarding Deepmind's flavour of deep Q-learning (written in 2016):

https://www.quora.com/Artificial-Int...rdan-Bennett-9
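For readers unfamiliar with the terminology: the "novel form of Q-learning" mentioned above builds on the standard Q-learning update rule. Below is a minimal tabular sketch (the toy states, actions, and learning rate are illustrative choices of mine, not DeepMind's actual network, which replaces the table with a convolutional net):

```python
# Tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    td_target = r + gamma * best_next        # bootstrapped return estimate
    Q[s][a] += alpha * (td_target - Q[s][a]) # move toward the TD target
    return Q[s][a]

# Two toy states, two actions, all Q-values initialised to zero.
Q = {s: {a: 0.0 for a in ("left", "right")} for s in (0, 1)}
q_update(Q, s=0, a="right", r=1.0, s_next=1)
print(Q[0]["right"])  # 0.1: moved 10% of the way toward reward 1.0
```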




Originally Posted by Reality Check View Post
I have found one Google DeepMind paper about neural network architecture that explicitly includes pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
(1)
My thought curvature paper is valid in expressing that Deepmind did not use pooling layers in the Atari Q model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model without pooling layers?



WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of position in an image. This is why Deepmind left them out: the model needs to remain sensitive to changes in entities' positions per frame, so that it can reinforce itself by Q-updating.
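The trade-off described above can be seen in a few lines: max-pooling maps a shifted frame to the same output, so position information is discarded. A minimal sketch (pure Python, with 1-D toy frames of my own invention rather than Atari screens):

```python
# Max-pooling with window 2 over a 1-D "frame": shifting the object
# within a pooling window leaves the pooled output unchanged
# (translation invariance), so the position signal is discarded.
def max_pool_1d(xs, window=2):
    return [max(xs[i:i + window]) for i in range(0, len(xs), window)]

frame_a = [0, 9, 0, 0]  # object at position 1
frame_b = [9, 0, 0, 0]  # object shifted to position 0
print(max_pool_1d(frame_a))  # [9, 0]
print(max_pool_1d(frame_b))  # [9, 0] -- same output: the shift is invisible
```

A network trained on the pooled output cannot distinguish the two frames, whereas a Q-updating agent that must react to an object's exact position needs that distinction preserved.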


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that Deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutional models can either include or exclude pooling. (Deep learning basics.)






ProgrammingGodJordan is offline
Old 31st August 2017, 09:17 PM   #106
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post

From prior threads, you had long demonstrated that you lack basic machine learning knowledge.

For example, you had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.

A reminder: the relevant exchange is quoted in full in my previous post (#105) above.


FOOTNOTE:

Of course, even if one lacks formal machine learning training (as you clearly demonstrate above), depending on one's field/area of research, one may still contribute.
However, this is not the case for you; all your claims of missing citations are invalid, as is demonstrated in the source.



Last edited by ProgrammingGodJordan; 31st August 2017 at 09:28 PM.
ProgrammingGodJordan is offline
Old 31st August 2017, 09:24 PM   #107
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Argumemnon View Post
His area of research is Truthtm.
Yes, I do science, and science is true.

So, it can be said that my area of research, like that of many scientists, is "truth", for science is true.





Last edited by ProgrammingGodJordan; 31st August 2017 at 09:34 PM.
ProgrammingGodJordan is offline
Old 31st August 2017, 09:36 PM   #108
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Mojo View Post
The question was perhaps rhetorical.





ProgrammingGodJordan is offline
Old 31st August 2017, 10:06 PM   #109
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,399
Originally Posted by ProgrammingGodJordan View Post
My area of research is computer science, particularly in Artificial Intelligence.

I am not trained in machine learning, university-wise, but I do research anyway.
Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
RussDill is offline
Old 31st August 2017, 11:19 PM   #110
Mojo
Mostly harmless
 
Mojo's Avatar
 
Join Date: Jul 2004
Posts: 29,317
Originally Posted by ProgrammingGodJordan View Post
The question was perhaps rhetorical.

The question was perhaps incoherent.
__________________
"You got to use your brain." - McKinley Morganfield

"The poor mystic homeopaths feel like petted house-cats thrown at high flood on the breaking ice." - Leon Trotsky
Mojo is online now
Old 1st September 2017, 12:35 AM   #111
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.
There are particular limits that I currently aim to resolve:

(1) I don't have access to Google-level GPUs for the purpose of rapid experimentation.

(2) I don't have the depth of knowledge that a PhD pioneer like Yoshua Bengio would possess, especially given the nature of my university's sub-optimal AI course.



FOOTNOTE:
(i) Despite (2), it is not inconceivable that I can detect regimes that PhD-trained machine learning people may miss.

For example (unlike state-of-the-art related works), I consider machine learning algebra as it relates to cognitive science. Bengio's works, especially concerning manifolds, do not (yet?) entirely encompass cognitive science, as cognitive science entails supersymmetry/supermanifolds, which Bengio's work does not.

(ii) Likewise, state-of-the-art work, such as Deepmind's work on manifolds, does not (yet?) entail cognitive science in its entirety, although Deepmind tends to consider boundaries amidst cognitive science.


(iii) Regardless of (2), I have communicated with Bengio in order to compose the thought curvature paper.

As such, although thought curvature does not yet compound encodings that are experimentally observable, it does express valid machine-learning-aligned algebra, on the horizon of empirical evidence, on which future work may occur.

EXAMPLES OF COMMUNICATIONS WITH BENGIO:










Last edited by ProgrammingGodJordan; 1st September 2017 at 12:37 AM.
ProgrammingGodJordan is offline
Old 1st September 2017, 04:23 AM   #112
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,474
Originally Posted by ProgrammingGodJordan View Post
Yes, I do science, and science is true.

So, it can be said that my area of research, like that of many scientists, is "truth", for science is true.







I recently posted in another of your bovine excrement threads where you redefined "God" into a useless term, that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?
halleyscomet is offline
Old 1st September 2017, 04:26 AM   #113
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
I recently posted in another of your bovine excrement threads where you redefined "God" into a useless term, that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?
I don't detect any sensible data amidst your response.



FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself) who aims to acquire a lot more scientific data?





Last edited by ProgrammingGodJordan; 1st September 2017 at 04:30 AM.
ProgrammingGodJordan is offline
Old 1st September 2017, 04:31 AM   #114
Argumemnon
World Maker
 
Argumemnon's Avatar
 
Join Date: Oct 2005
Location: In the thick of things
Posts: 67,304
Originally Posted by ProgrammingGodJordan View Post
Yes, I do science
No you don't. You have no understanding of what that word means.
__________________
<Roar!>

Argumemnon is offline
Old 1st September 2017, 04:36 AM   #115
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Argumemnon View Post
No you don't. You have no understanding of what that word means.
Have you anything valid to express, beyond non-evidenced blather?


FOOTNOTE:
I am off to slumber, so I shan't yet have the opportunity to observe a valid response that you may later (or at all?) write here.




Last edited by ProgrammingGodJordan; 1st September 2017 at 04:38 AM.
ProgrammingGodJordan is offline
Old 1st September 2017, 04:39 AM   #116
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,474
Originally Posted by ProgrammingGodJordan View Post
I don't detect any sensible data amidst your response.

Now you know how we feel reading your posts.


Originally Posted by ProgrammingGodJordan View Post
FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself), who aims to acquire a lot more scientific data?

You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.
halleyscomet is offline
Old 1st September 2017, 04:43 AM   #117
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,474
Originally Posted by ProgrammingGodJordan View Post
Have you anything valid to express, beyond non-evidenced blather?

You ask a question you should answer yourself.
halleyscomet is offline
Old 1st September 2017, 04:51 AM   #118
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
Now you know how we feel reading your posts.





You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.
Instead of blathering on without evidence, you should perhaps demonstrate how I supposedly fail to present valid data.





ProgrammingGodJordan is offline
Old 1st September 2017, 04:52 AM   #119
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
You ask a question you should answer yourself.
So, you still have yet to present any data beyond non-evidenced blather.
I ponder why?




ProgrammingGodJordan is offline
Old 1st September 2017, 04:57 AM   #120
Argumemnon
World Maker
 
Argumemnon's Avatar
 
Join Date: Oct 2005
Location: In the thick of things
Posts: 67,304
Originally Posted by ProgrammingGodJordan View Post
Have you anything valid to express, beyond non-evidenced blather?
I do have evidence for my claim: all of your posts.

You're basically posting your musings. What do you expect as a rebuttal?
__________________
<Roar!>

Argumemnon is offline
Powered by vBulletin. Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
© 2014, TribeTech AB. All Rights Reserved.
This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.