Merged Artificial Intelligence Research: Supermathematics and Physics

16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF
Deepmind’s atari q architecture encompasses non-pooling convolutions, therein generating object shift sensitivity, whence the model maximizes some reward over said shifts together with separate changing states for each sampled t state; translation non-invariance
I have covered the "atari q" nonsense (there is no "Atari" or "q" architecture for DeepMind; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.
 
16 August 2017 ProgrammingGodJordan: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
PDF

I have covered the "atari q" nonsense (there is no "Atari" or "q" architecture for DeepMind; it plays Atari games using Q-learning). There is the bad scholarship of no supporting citations and some incoherence. This may be an attempt to say that DeepMind recognizes moving objects such as sprites in a video game.

Wrong.

It is no fault of mine that you are unable to reduce basic English.

Anyway, it was you that expressed nonsense:

ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

You falsely believed that pooling layers were crucial to models with convolutional layers, despite the fact that the Atari Q model did not include any such pooling layer.

The evidence is clearly observable:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9
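
Here is, additionally, a minimal sketch (not taken from the linked answer or from DeepMind's code; `q_net` and the numbers are illustrative placeholders) of the one-step Q-learning target that deep Q-networks optimise:

[CODE]
# Minimal sketch of the Bellman target r + gamma * max_a' Q(s', a') that
# deep Q-learning regresses toward; q_net and the numbers are illustrative.
import numpy as np

def td_target(reward, next_state, q_net, gamma=0.99, done=False):
    """One-step Q-learning target for a single transition."""
    if done:
        return reward
    return reward + gamma * np.max(q_net(next_state))

# Toy Q-function over 4 discrete actions (it ignores its input state).
toy_q_net = lambda state: np.array([0.1, 0.5, -0.2, 0.3])
print(td_target(reward=1.0, next_state="s_prime", q_net=toy_q_net))  # 1.0 + 0.99 * 0.5 = 1.495
[/CODE]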




I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?
[image: PaUaBx9.png]


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of an object's position in an image. This is why DeepMind left them out: the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.
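
For concreteness, here is a minimal sketch of such a model: a convolutional stack with strided convolutions and no pooling layers, so positional information survives to the Q-value head. (This is not DeepMind's actual code; the layer shapes are assumptions loosely following the published Atari DQN.)

[CODE]
# Minimal sketch, not DeepMind's code: a convolutional Q-network with no
# pooling layers; layer shapes loosely follow the Atari DQN and are illustrative.
import torch
import torch.nn as nn

class NoPoolingQNetwork(nn.Module):
    def __init__(self, num_actions: int, frames: int = 4):
        super().__init__()
        # Strided convolutions only; no MaxPool2d anywhere, so positional
        # information is preserved for the Q-value head.
        self.features = nn.Sequential(
            nn.Conv2d(frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # assumes 84x84 input frames
            nn.Linear(512, num_actions),             # one Q-value per action
        )

    def forward(self, x):
        return self.head(self.features(x))

q = NoPoolingQNetwork(num_actions=6)
print(q(torch.zeros(1, 4, 84, 84)).shape)  # torch.Size([1, 6])
[/CODE]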


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)
 
[IMGw=180]http://i.imgur.com/MyFzMcl.jpg[/IMGw]


PART A

It's time to escape that onset of self-denial, Reality Check.

Okay, let us unravel your errors:

(1) Why did you lie and express that 'any point in a supermanifold...is never euclidean', despite scientific evidence to the contrary?

(2) Why ignore that you hadn't known that deep learning models could include or exclude pooling layers?

(3) Following your blunder in (2) above, why ignore that the Atari Q model did not include pooling, for pretty clear reinforcement learning reasons (as I had long expressed in my thought curvature paper)?

(4) Why continuously accuse me of supposedly expressing that 'all super-manifolds were locally euclidean', contrary to the evidence? Why do my words "Supermanifold may encode as 'essentially flat euclidean super space' fabric" translate strictly to "Supermanifolds are euclidean" for you?
(accusation source 1, accusation source 2, accusation source 3)





PART B

Why Reality Check was wrong (relating to question 1):


Why Reality Check was wrong (relating to questions 2 and 3):

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?
[image: PaUaBx9.png]


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of an object's position in an image. This is why DeepMind left them out: the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)

Why Reality Check was wrong (relating to question 4):



Nowhere had I stated that "all supermanifolds are locally Euclidean".

In fact, my earlier post (which preceded your accusation above) clearly expressed that "Supermanifold may encode as 'essentially flat euclidean super space' fabric".

Nowhere above does it express that all supermanifolds are locally euclidean. Why bother to lie?
 

You need to observe, once more, my prior quote:

ProgrammingGodJordan said:
You must observe by now, that supermanifolds may bear euclidean behaviour. (See euclidean supermanifold reference)

Where the above is valid, Grassmann algebra need not apply, as I have long stated.

Otherwise, why bother to ignore the evidence?

How shall ignoring the evidence benefit your education?
 
Irrelevant. Max Tegmark is also a physicist who has not undergone official artificial intelligence training, and yet he has already contributed important work in the field of machine learning.

Tegmark presents consciousness as a mathematical problem, while Witten presents it as a likely forever unsolvable mystery.
I didn't suggest that being a physicist would prevent him from making contributions to AI. I suggested that it wouldn't guarantee that he would. Showing that other physicists have made such contributions would address the first argument, but not the second.

Similarly, people who wear red hats aren't necessarily going to be able to make breakthroughs in AI. Finding a picture of an AI researcher who has made breakthroughs wearing a red hat wouldn't change that fact.




It is unavoidable that he could contribute; manifolds (something Edward works on) apply empirically in machine learning.

One need not be a Nobel-prize-winning physicist to observe the above.

I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.
 
I actually think that it's reasonable to think he might be able to make some sort of a contribution, though I wouldn't wager whether it would be large or small. But you haven't addressed the point that his time is finite. He can either spend any particular minute of his time thinking about and working on physics or on AI, but not both. Again, I suspect that he is the best judge of how that time is best spent.

Consider a prior quote of mine, which you may have missed:

ProgrammingGodJordan said:
It is noteworthy that physicists aim to unravel the cosmos' mysteries, so it is a mystery why Witten would choose not to partake in the active machine learning field, especially given that:

(1) Manifolds apply non-trivially in machine learning.

(2) AI is one of mankind's most profound tools.

(3) AI is already performing Nobel-prize-level tasks very efficiently.

(4) AI may be mankind's last invention.
 
ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations)

18 August 2017 ProgrammingGodJordan: Thought Curvature uetorch bad scholarship (no citations) and incoherence
PDF
Separately, uetorch, encodes an object trajectory behaviour physics learner, particularly on pooling layers; translation invariance
A mishmash of words not meaning much.
There is a "uetorch" open-source environment that uses the Torch deep learning framework.
 
ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework"

18 August 2017 ProgrammingGodJordan: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
PDF
It is non-abstrusely observable, that the childhood neocortical framework pre-encodes certain causal physical laws in the neurons (Stahl et al), amalgamating in perceptual learning abstractions into non-childhood.
That sentence is the only "Stahl" on the web page displaying the PDF!
I am getting the impression that English is a second language for the author or they are stringing together science words and thinking it makes sense.
 
ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature "non-invariant fabric" gibberish.
PDF
As such, it is perhaps exigent that non-invariant fabric composes in the invariant, therein engendering time-space complex optimal causal, conscious artificial construction. If this confluence is reasonable, is such paradoxical?
Everyone can read that this paragraph is gibberish and invalid English.
A total non sequitur (not "As such") into "fabric".
 
ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish

18 August 2017 ProgrammingGodJordan: Thought Curvature Partial paradox reduction gibberish and missing citations.
PDF
Partial paradox reduction
Paradoxical strings have been perturbed to reduce in factor variant/invariant manifold interaction paradigms (Bengio et al, Kihyuk et al), that effectively learn to disentangle varying factors.
 
ProgrammingGodJordan: A lie about what I wrote in a post

A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.
Repeating ignorance about supermanifolds does not change the fact that they are not locally Euclidean, as everyone who reads the Wikipedia article you cited understands.
Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: the real numbers R with an anti-commuting generator ε, "where εε = −εε, whence ε² = 0". For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.
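
A minimal sketch (not from Penrose's book) of that simplest example, numbers of the form a + εb with ε² = 0:

[CODE]
# Minimal sketch (not from Penrose's book) of numbers a + eps*b over R,
# where the generator eps anti-commutes with itself, so eps*eps = 0.
from dataclasses import dataclass

@dataclass
class SuperNumber:
    a: float  # ordinary "body" part in R
    b: float  # coefficient of the generator eps

    def __add__(self, other):
        return SuperNumber(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + eps*b1)(a2 + eps*b2) = a1*a2 + eps*(a1*b2 + b1*a2), since eps**2 = 0
        return SuperNumber(self.a * other.a, self.a * other.b + self.b * other.a)

eps = SuperNumber(0.0, 1.0)
print(eps * eps)                              # SuperNumber(a=0.0, b=0.0): eps squares to zero
print(SuperNumber(2, 3) * SuperNumber(5, 7))  # 10 + 29*eps
[/CODE]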

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.
I already knew about their use in convolutional neural networks so I went looking for their possible use for DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance into "Supermanifolds are euclidean", because you know that I know supermanifolds are not Euclidean.
 
Supermathematics and Artificial General Intelligence / Thought Curvature

[imgw=350]http://i.imgur.com/1qOIvRh.gif[/imgw]


Intriguingly, both the Google DeepMind paper "Early Visual Concept Learning" (September 2016) and my paper "Thought curvature" (May 2016):

(1) Consider combining what machine learning calls translation-invariant and translation-variant paradigms (i.e. disentangling factors of variation); a toy illustration follows after this list.

(2) Do (1) particularly in the regime of reinforcement learning, causal laws of physics, and manifolds.
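
Here is the toy illustration mentioned in (1); it is not from either paper. A global max-pool gives the same response for an object and for the same object shifted by one pixel (translation invariance), while the unpooled feature map changes (translation variance):

[CODE]
# Toy illustration (not from either paper): pooling collapses a one-pixel
# shift, while the raw, unpooled feature map still distinguishes it.
import numpy as np

def feature_map(image, kernel=np.ones((2, 2))):
    """Valid 2-D cross-correlation with a tiny all-ones kernel."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((6, 6));         img[1, 1] = 1.0  # a single "object" pixel
shifted = np.zeros((6, 6)); shifted[1, 2] = 1.0  # the same object, shifted by one pixel

fm, fm_shifted = feature_map(img), feature_map(shifted)
print(np.array_equal(fm, fm_shifted))  # False: without pooling, the response is translation-variant
print(fm.max() == fm_shifted.max())    # True: a global max-pool is translation-invariant
[/CODE]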


FOOTNOTE:
Notably, beyond the Deepmind paper, thought curvature describes the (machine learning related) algebra of Supermanifolds, instead of mere manifolds.


QUESTION:
Given particular streams of evidence..., is a degree of the super-manifold structure a viable path in the direction of mankind's likely last invention, Artificial General Intelligence?


Edited by Agatha: 
Edited as the 'thought curvature' link is dead. Please go to this link: https://www.researchgate.net/publication/316586028_Thought_Curvature_An_underivative_hypothesis











See thought curvature paper.
I replied to your post where you defined eta with a link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid: η = (ν³/ε)^(1/4),
where ν is the kinematic viscosity and ε is the rate of kinetic energy dissipation.

But since you brought it up, the original post of gibberish:
"Simply", it consists of manifolds as models for concept representation, in conjunction with policy π - a temporal difference learning paradigm representing distributions over eta.
has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!
 
I replied to your post where you defined eta with a link to an irrelevant Wikipedia article with a definition used in fluid mechanics.
The Kolmogorov scale eta is a parameter of a fluid: η = (ν³/ε)^(1/4), where ν is the kinematic viscosity and ε is the rate of kinetic energy dissipation.


But since you brought it up, the original post of gibberish:

has led to
1 September 2017 ProgrammingGodJordan: A lie about "distributions over eta" being in his thought curvature PDF.
There is no eta at all in the current PDF!

Eta (η) simply refers to the input space which the thought curvature structure may absorb/evaluate.
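
Purely as a toy reading of that sentence (one possible interpretation, not necessarily what the thought curvature paper intends), temporal-difference learning over a small discrete input space η could look like this:

[CODE]
# Toy reading only (not necessarily the paper's intended meaning): tabular
# TD(0) value estimates over a tiny discrete "input space" eta, with a
# uniform random policy supplying the next element.
import numpy as np

rng = np.random.default_rng(0)
eta = [0, 1, 2, 3]              # toy discrete input space
V = np.zeros(len(eta))          # value estimate for each element of eta
alpha, gamma = 0.1, 0.9

def pi(state):
    """Uniform random policy: sample the next element of eta."""
    return rng.integers(len(eta))

for _ in range(1000):
    s = rng.integers(len(eta))
    s_next = pi(s)
    reward = 1.0 if s_next == 3 else 0.0                 # arbitrary toy reward
    V[s] += alpha * (reward + gamma * V[s_next] - V[s])  # TD(0) update

print(np.round(V, 2))
[/CODE]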





 
A crazily formatted post leads to:
18 August 2017 ProgrammingGodJordan: A lie about what I wrote in a post.
I did not write 'any point in a supermanifold...is never euclidean' in my 29th March 2017 post.

Locally means a small region.
For others:
A point in a supermanifold has non-Euclidean components and so cannot be Euclidean.
Roger Penrose has a few pages on supermanifolds in 'The Road To Reality' and (N.B. from memory) gives the simplest example: the real numbers R with an anti-commuting generator ε, "where εε = −εε, whence ε² = 0". For every a and b in R there is a corresponding a + εb. I visualize this as extending R into a very weird plane.

18 August 2017 ProgrammingGodJordan: A fantasy that I did not know deep learning models could include or exclude pooling layers.
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind

I already knew about their use in convolutional neural networks so I went looking for their possible use for DeepMind.

18 August 2017 ProgrammingGodJordan: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

18 August 2017 ProgrammingGodJordan: "Supermanifold may encode as "essentially flat euclidean super space"" obsession again.
I translate that as ignorance about supermanifolds. It is a lie that I translate that ignorance into "Supermanifolds are euclidean", because you know that I know supermanifolds are not Euclidean.


Alright, you have demonstrated that you lack basic machine learning knowledge.


PART A
You had unavoidably mentioned that "the set of points in the neighborhood of any point in a supermanifold is never Euclidean."


PART B
My prior expression, "Deepmind's Atari Q architecture", nowhere mentioned that DeepMind (a machine learning company) was an "Atari machine".

Here are other typical presentations of DeepMind's Atari Q architecture:

(1) https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner

(2) http://ikuz.eu/2015/02/27/google-deepmind-publishes-atari-q-learner-source-code/


PART C
You had long demonstrated that you lacked basic knowledge in machine learning.

WHY?
You had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.

RECALL:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?
[image: PaUaBx9.png]


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of an object's position in an image. This is why DeepMind left them out: the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)






 


From prior threads, you had long demonstrated that you lack basic machine learning knowledge.

For example, you had demonstrated that you hadn't known that deep learning models could include or exclude pooling layers.
A reminder:

[imgw=150]http://i.imgur.com/JYrZOW4.jpg[/imgw]


Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.

You are demonstrably wrong, as you will see below.



ProgrammingGodJordan said:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning".


What is the relevance of your line above?

Here is a more detailed, intuitive, mathematical description of mine, regarding deepmind's flavour of deep q learning (written in 2016):

https://www.quora.com/Artificial-In...p-Q-networks-DQN-work/answer/Jordan-Bennett-9




I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.

(1)
My thought curvature paper is unavoidably valid in expressing that DeepMind did not use pooling layers in the Atari Q model. (See (2) below).




(2)
Don't you know any machine learning?

Don't you know that convolutional layers can be in a model, without pooling layers?
[image: PaUaBx9.png]


WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?

In particular, pooling layers enable translation invariance, such that object detection can occur regardless of an object's position in an image. This is why DeepMind left them out: the model is quite sensitive to changes in embedding/entities' positions per frame, so the model can reinforce itself by Q-updating.


SOME RESOURCES TO HELP TO PURGE YOUR IGNORANCE:

(a) Deepmind's paper.

(b) If (a) is too abstruse, see this breakdown of why the Atari Q model left out pooling layers. (A clear explanation, similar to the 'WHY NO POOLING LAYERS (FOR THIS PARTICULAR SCENARIO)?' section above, and to what has long been written in the thought curvature paper.)




FOOTNOTE:
It is no surprise that deepmind used pooling in another framework. Pooling layers are used in deep learning all the time, and convolutions can either include, or exclude pooling. (Deep learning basics)


FOOTNOTE:

Of course, even if one lacks official machine learning training (as you clearly demonstrate above), depending on one's field/area of research, one may still contribute.
However, this is not the case for you; all your claims of missing citations are invalid, as is demonstrated in the source.



My area of research is computer science, particularly in Artificial Intelligence.

I am not trained in machine learning, university-wise, but I do research anyway.

Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.
 
Then if you have these questions and it's an area you either understand, or are thoroughly motivated to understand, why not answer them by experiment?

If you understand the topic well enough to do the experiments, then do them. If you don't, then you are arguing about a topic from a point of ignorance. Your time would be far better served by learning the topic sufficiently to answer the questions.

There are particular limits that I currently aim to resolve:

(1) I don't have access to Google-level GPUs for the purpose of rapid experimentation.

(2) I don't have the depth of knowledge that a PhD pioneer like Yoshua Bengio would possess, especially given the nature of my university's sub-optimal AI course.



FOOTNOTE:
(i) Despite (2), it is not inconceivable that I can detect regimes that PhD-aligned machine learning people may miss.

For example (unlike state of the art related works), I consider machine learning algebra, as it relates to cognitive science. Bengio's works, especially concerning manifolds, do not yet? entirely compound cognitive science, as cognitive science entails supersymmetry/supermanifolds, which Bengio's work does not entail.

(ii) Likewise state of the art work, such as deepmind's works on manifolds do not yet? entail cognitive science, in entirety, although deepmind tends to consider boundaries amidst cognitive science.


(iii) Regardless of (2) though, I have communicated with Bengio, in order to compose the thought curvature paper.

As such, although thought curvature does not yet compound encodings that are experimentally observable, it does express valid machine learning aligned algebra, especially on the horizon of empirical evidence, on which future work may occur.

EXAMPLES OF COMMUNICATIONS WITH BENGIO:


[image: x3RM20F.png]








Yes, I do science, and science is true.

So, it can be said that my area of research, like that of many scientists, is "truth", for science is true.








I recently posted in another of your bovine excrement threads (where you redefined "God" into a useless term) that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?
 
I recently posted in another of your bovine excrement threads (where you redefined "God" into a useless term) that "Science" had been redefined as "Medieval European Alchemy." I'm now expanding that definition to ALL threads in which you and I are participants.

Adjust your Dunning/Kruger discussion of AI and machine learning accordingly.

How do the Philosopher's Stone and the transmutation of metals fit into your model?

I don't detect any sensible data amidst your response.



FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself), who aims to acquire a lot more scientific data?





No you don't. You have no understanding of what that word means.

Have you anything valid to express, beyond non-evidenced blather?


FOOTNOTE:
I am off to slumber, so I shan't yet have the opportunity to observe a valid response that you may later (or at all?) write here.




I don't detect any sensible data amidst your response.


Now you know how we feel reading your posts.


FOOTNOTE:
Curiously, how does Dunning/Kruger supposedly apply to a being (i.e. myself), who aims to acquire a lot more scientific data?


You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.
 
Now you know how we feel reading your posts.





You have no idea what you're writing about. You don't understand any of the concepts you're onanizing on. You cover up your complete lack of comprehension with arrogance and poor writing but nobody is fooled.

You are accumulating data but not understanding any of it. You are comparable to an illiterate man with a massive library, bragging about how educated he is because of the massive library he cannot read.

You should be seeking understanding, not accumulating more buzzwords to throw into your word salads.

Instead of blathering on absent evidence, it is pertinent that you perhaps demonstrate how I supposedly fail to present valid data.





 
