Old 15th August 2017, 09:32 PM   #81
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF
Quote:
Deepmind’s atari q architecture encompasses non-pooling convolutions, therein generating object shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled t state; translation non-invariance
I have covered the "atari q" nonsense (there is no "Atari q" architecture; DeepMind plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.
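For anyone following the jargon: Q-learning is a standard reinforcement-learning rule that updates an estimate of the value of taking an action in a state from the rewards observed. A minimal tabular sketch (my own illustrative code and parameter names, not anything from the PDF):

Code:
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))   # action-value table Q(s, a)
alpha, gamma = 0.1, 0.99              # learning rate, discount factor

def q_update(s, a, r, s_next):
    # One Q-learning step: move Q(s, a) toward r + gamma * max over a' of Q(s_next, a')
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(0, 1, 1.0, 2)   # after this one step, Q[0, 1] == 0.1

DeepMind's contribution was to replace the table with a convolutional neural network.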
Old 15th August 2017, 09:33 PM   #82
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
16 August 2017 ProgrammingGodJordan: Demonstrates an inability to read - my post was about other Grassmannian nonsense he posted!
15 August 2017 ProgrammingGodJordan: Grassmann number ignorance and nonsense, which is about nonsense in a 30 March 2017 post.
You must observe by now that supermanifolds may bear Euclidean behaviour. (See the Euclidean supermanifold reference.)

Where the above is valid, Grassmann algebra need not apply, as I have long stated.
Old 15th August 2017, 09:38 PM   #83
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
Originally Posted by ProgrammingGodJordan View Post
Typo Correction, ...temporal difference learning paradigm representing distributions over eta.
ProgrammingGodJordan, you linked to an irrelevant Wikipedia article, unless you are doing numerical simulations of fluids.
Direct numerical simulation
Quote:
A direct numerical simulation (DNS)[1] is a simulation in computational fluid dynamics in which the Navier–Stokes equations are numerically solved without any turbulence model.
Old 15th August 2017, 09:38 PM   #84
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF

I have covered the "atari q" nonsense (there is no "Atari q" architecture; DeepMind plays Atari games using Q-learning). There is the bad scholarship of no supporting citations, and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.
Wrong.

It is no fault of mine that you are unable to parse basic English.

Anyway, it was you who expressed nonsense:

Originally Posted by Reality Check View Post
Originally Posted by ProgrammingGodJordan
Deepmind’s atari q architecture encompasses non-pooling convolutions
I have found one Google DeepMind paper about neural network architecture that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any reference for DeepMind.
You falsely believed that pooling layers were crucial to models with convolutional layers, despite the fact that the Atari Q model did not include any pooling layer.

The evidence is clearly observable:





Originally Posted by Reality Check View Post
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.
You are demonstrably wrong, as you will see below.



Originally Posted by Reality Check View Post
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".

What is the relevance of your line above?

Here is a more detailed, intuitive mathematical description of mine regarding DeepMind's flavour of deep Q-learning (written in 2016):

https://www.quora.com/Artificial-Int...rdan-Bennett-9




Originally Posted by Reality Check View Post
I have found one Google DeepMind paper about neural network architecture that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any reference for DeepMind.
(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below.)




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model without pooling layers?



WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of an object's position in an image. This is why DeepMind left them out: the model stays sensitive to changes in embedded entities' positions per frame, so that it can reinforce itself by Q-updating.
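For concreteness, a minimal sketch of such a pooling-free convolutional Q-network (the layer sizes follow DeepMind's published DQN; the class name and code here are merely illustrative):

Code:
import torch
import torch.nn as nn

class PoolFreeQNet(nn.Module):
    # Convolution-only feature stack: no pooling layers, so spatial shifts
    # in the input change the features, keeping Q-values position-sensitive.
    def __init__(self, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),  # 4 stacked 84x84 frames in
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 feature map from an 84x84 input
            nn.Linear(512, n_actions),              # one Q-value per action
        )

    def forward(self, x):
        return self.head(self.features(x))

q_values = PoolFreeQNet(n_actions=6)(torch.zeros(1, 4, 84, 84))  # shape (1, 6)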


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why Atari Q left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and as long written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that DeepMind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutional stacks can either include or exclude pooling. (Deep learning basics)

Last edited by ProgrammingGodJordan; 15th August 2017 at 09:39 PM.
Old 15th August 2017, 09:39 PM   #85
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
Originally Posted by ProgrammingGodJordan View Post
You must observe by ....
I have observed that you cannot understand what you read, specifically that post:
15 August 2017 ProgrammingGodJordan: Grassmann number ignorance and nonsense.
16 August 2017 ProgrammingGodJordan: Demonstrates an inability to read - my post was about other Grassmannian nonsense he posted!
Old 15th August 2017, 09:45 PM   #86
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
ProgrammingGodJordan, you linked to an irrelevant Wikipedia article, unless you are doing numerical simulations of fluids.
Direct numerical simulation
See the thought curvature paper. That eta is related to this, as presented there.
Old 15th August 2017, 10:16 PM   #87
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290



PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite contrary scientific evidence?

(2) Why ignore that you hadn't known that deep learning models could include or exclude pooling layers?

(3) From your blunder in (2) above, why ignore that Atari Q did not include pooling, for pretty clear reinforcement learning reasons (as I had long expressed in my thought curvature paper)?

(4) Why continuously accuse me of supposedly expressing that 'all supermanifolds were locally euclidean', contrary to the evidence? Why do my words "Supermanifold may encode as 'essentially flat euclidean super space' fabric" translate strictly to "Supermanifolds are euclidean" to you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong (relating to questions 2 and 3):




(My reply in post #84 above already covers these points in full; see that post.)


Why Reality Check was wrong (relating to question 4):


Originally Posted by Reality Check View Post
Nowhere had I supposedly stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nowhere above expresses that all supermanifolds were locally euclidean. Why bother to lie?

Last edited by ProgrammingGodJordan; 15th August 2017 at 10:18 PM.
Old 15th August 2017, 10:23 PM   #88
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Reality Check View Post
You need to observe once more my prior quote:

Originally Posted by ProgrammingGodJordan
You must observe by now that supermanifolds may bear Euclidean behaviour. (See the Euclidean supermanifold reference.)

Where the above is valid, Grassmann algebra need not apply, as I have long stated.
Otherwise, why bother to ignore the evidence?

How shall ignoring the evidence benefit your education?
Old 16th August 2017, 01:44 AM   #89
Roboramma
Philosopher
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 9,645
Originally Posted by ProgrammingGodJordan View Post
Irrelevant. Max Tegmark is also a physicist who has not undergone official artificial intelligence training, and yet he has already contributed important work in the field of machine learning.

Tegmark presents consciousness as a mathematical problem, while Witten presents it as a likely forever unsolvable mystery.
I didn't suggest that being a physicist would prevent him from making contributions to AI. I suggested that it wouldn't guarantee that he would. Showing that other physicists have made such contributions would address the first argument, but not the second.

Similarly, people who wear red hats aren't necessarily going to be able to make breakthroughs in AI. Finding a picture of an AI researcher who has made breakthroughs wearing a red hat wouldn't change that fact.




Quote:
It is unavoidable that he could contribute; manifolds (something Edward works on) apply empirically in machine learning.

One need not be a Nobel-prize-winning physicist to observe the above.
I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 16th August 2017, 11:42 AM   #90
ProgrammingGodJordan
Suspended
 
Join Date: Feb 2017
Posts: 1,290
Originally Posted by Roboramma View Post


I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
Consider a prior quote of mine that you may have missed:

Originally Posted by ProgrammingGodJordan
It is noteworthy that physicists aim to unravel the cosmos' mysteries, so it is a mystery why Witten would choose not to partake in the active machine learning field, especially given that:

(1) Manifolds apply non-trivially in machine learning.

(2) AI is one of mankind's most profound tools.

(3) AI is already performing Nobel-prize-level tasks, very efficiently.

(4) AI may be the last invention mankind need ever make.
Old 17th August 2017, 07:34 PM   #91
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations)

18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
PDF
Quote:
Separately, uetorch, encodes an object trajectory behaviour physics learner, particularly on pooling layers; translation invariance
A mishmash of words, not meaning much.
There is a "uetorch" open source environment using the Torch deep learning environment.
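For what it is worth, the "translation invariance" that pooling provides is a real and simple effect, whatever the quoted sentence is trying to say. A toy sketch (my own toy function, nothing from uetorch):

Code:
import numpy as np

def max_pool_1d(x, width=2):
    # Non-overlapping 1-D max pooling
    return x.reshape(-1, width).max(axis=1)

x = np.array([9, 0, 0, 0, 0, 0, 0, 0])   # a "feature" at position 0
x_shifted = np.roll(x, 1)                 # the same feature shifted one step

print(max_pool_1d(x))          # [9 0 0 0]
print(max_pool_1d(x_shifted))  # [9 0 0 0] -- a shift within a pooling window vanishes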

Last edited by Reality Check; 17th August 2017 at 07:35 PM.
Old 17th August 2017, 07:51 PM   #92
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework"

18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
PDF
Quote:
It is non-abstrusely observable, that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al), amalgamating in perceptual learning abstractions into non-childhood.
That sentence is the only "Stahl" on the web page displaying the PDF!
I am getting the impression either that English is a second language for the author, or that they are stringing together science words and thinking it makes sense.
Old 17th August 2017, 07:58 PM   #93
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
PDF
Quote:
As such, it is perhaps exigent that non-invariant fabric composes in the invariant, therein engendering time-space complex optimal causal, conscious artificial construction. If this confluence is reasonable, is such paradoxical?
Everyone can read that this paragraph is gibberish and invalid English.
A total non sequitur (not "As such") into "fabric".
Old 17th August 2017, 08:02 PM   #94
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
PDF
Quote:
Partial paradox reduction
Paradoxical strings have been perturbed to reduce in factor variant/invariant manifold interaction paradigms (Bengio et al, Kihyuk et al), that effectively learn to disentangle varying factors.
Old 17th August 2017, 08:34 PM   #95
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 19,991
ProgrammingGodJordan: A lie about what I wrote in a post

A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.
Quote:
Repeating ignorance about supermanifolds does not change that they are not locally Euclidean, as everyone who reads that Wikipedia article you cited understands.
Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: the real numbers R with an anticommuting generator ε, where εε = −εε, whence ε² = 0. For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.
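That one-generator algebra is easy to check mechanically. A minimal sketch (my own illustrative class; multiplication drops the εε term exactly as Penrose's rule demands):

Code:
from dataclasses import dataclass

@dataclass
class RPlusEps:
    # Numbers a + eps*b over the reals, with one generator eps where eps*eps = 0
    a: float   # ordinary real part
    b: float   # coefficient of eps

    def __add__(self, other):
        return RPlusEps(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + eps*b)(c + eps*d) = ac + eps*(ad + bc); the eps*eps term vanishes
        return RPlusEps(self.a * other.a, self.a * other.b + self.b * other.a)

eps = RPlusEps(0.0, 1.0)
print(eps * eps)   # RPlusEps(a=0.0, b=0.0): eps squares to zero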

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind
Quote:
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found one Google DeepMind paper about neural network architecture that explicitly mentions pooling layers, though not as an implemented architecture element: Exploiting Cyclic Symmetry in Convolutional Neural Networks.
I already knew about their use in convolutional neural networks, so I went looking for their possible use by DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish, when DeepMind is not an Atari machine and has no "q" (though it does use Q-learning).

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as 'essentially flat euclidean super space'" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance to "Supermanifolds are euclidean", because you know that I know supermanifolds are not Euclidean.

Last edited by Reality Check; 17th August 2017 at 08:59 PM.