Tags: artificial intelligence, consciousness, Edward Witten, Max Tegmark

Old 5th October 2017, 12:44 PM   #361
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
(1) I still lack sufficient GPU resources, in order to do particular tests w.r.t. to certain parts involving thought curvature.

(2) If you read the paper you would find that the paper is based on something called the quantum boltzmann machine, and quantum reinforcement learning.

So the outcome is that I lack yes, both GPU resources, and Quantum Computing resources.

Thanks for trying to help, but you ended up attacking my thread without evidence, like many others here have done.

If you're going to attack, attack with evidence please, and actually take more than 5 minutes to read thought curvature.
I'm not quite sure why you considered that an "attack." If anything it was providing you tools to get around your current hardware limitations or to at least quantify the hardware you WOULD need to get the job done. Knowing what resources you need is a big part of completing a project.

To that end, I suggest you look into some of the grid computing technologies available now:

https://golem.network

http://www.gridcoin.us

Neural network modeling should lend itself nicely to distributed computing. True, it won't be as zippy as if you had your own bank of Bitcoin mining machines repurposed to your needs, but it's far more productive than sitting around complaining about your lack of hardware.

Even better, accessing an existing grid computing architecture will be a lot CHEAPER, making it far easier to get funding or even run a GoFundMe campaign to get the resources needed to put your ideas to the test.

If you take advantage of grid computing you can start a domino effect in your research, proving, or disproving a lot of your hypotheses.
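For concreteness, here is a toy sketch (my own made-up example, not anyone's actual project) of the data-parallel pattern that grid and cloud setups distribute across machines: each worker stands in for a remote node, computes a gradient on its own shard of the data, and the shard gradients are averaged before the weight update.

Code:
# Toy data-parallel training sketch: model, data, and learning rate are
# invented for illustration; each process plays the role of a grid node.
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    """Mean-squared-error gradient of a linear model on one data shard."""
    w, X, y = args
    return 2.0 * X.T @ (X @ w - y) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
    w = np.zeros(8)
    shards = np.array_split(np.arange(1024), 4)   # pretend each shard lives on its own node
    with Pool(4) as pool:
        for _ in range(200):
            grads = pool.map(shard_gradient, [(w, X[s], y[s]) for s in shards])
            w -= 0.05 * np.mean(grads, axis=0)     # average the per-node gradients
    print("final training loss:", np.mean((X @ w - y) ** 2))

On a real grid only the small gradient vectors would cross the network, which is what makes the pattern cheap to distribute.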


Last edited by halleyscomet; 5th October 2017 at 01:06 PM.
Old 5th October 2017, 01:57 PM   #362
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
Originally Posted by ProgrammingGodJordan View Post


(1) I still lack sufficient GPU resources, in order to do particular tests w.r.t. to certain parts involving thought curvature.

(2) If you read the paper you would find that the paper is based on something called the quantum boltzmann machine, and quantum reinforcement learning.

So the outcome is that I lack yes, both GPU resources, and Quantum Computing resources.
From the QBM paper: "We show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training." So they actually did the research

And: "We also discuss the possibility of using quantum annealing processors like D-Wave for QBM training and application." They did not use a quantum computer to do the research.

The authors of the paper did not make excuses and post endlessly about being "attacked". Instead they did the research. You have access to GPU cloud compute resources. You have access to quantum computation models. Why are you here instead of doing the research?
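For readers wondering what "exact diagonalization" means in practice: for a handful of qubits the Hamiltonian is a small matrix, so the Boltzmann (thermal) state can be computed directly on an ordinary computer, no quantum hardware required. A hedged sketch with an invented transverse-field Ising Hamiltonian (the size, couplings, and temperature below are not taken from the QBM paper):

Code:
# Exact diagonalization of a tiny made-up spin Hamiltonian, followed by the
# Boltzmann (thermal) state rho = exp(-beta*H)/Z, all on a classical machine.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(single, site, n):
    """Place a single-qubit operator on `site` of an n-qubit register."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

n = 4                                                            # 2**4 = 16-dim space
H = sum(-op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))    # ZZ couplings
H += sum(-0.5 * op(X, i, n) for i in range(n))                   # transverse field

evals, evecs = np.linalg.eigh(H)          # the "exact diagonalization" step
beta = 1.0
w = np.exp(-beta * evals)
rho = (evecs * (w / w.sum())) @ evecs.T   # Boltzmann state in the original basis
print("trace of rho:", np.trace(rho))     # ~1.0, as a density matrix should have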
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 5th October 2017, 03:04 PM   #363
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,727
Originally Posted by ProgrammingGodJordan View Post
..and...
Rumors are irrelevant to what you wrote. That was a section stating that it would propose an experiment, and then not proposing one. Thus:
5 October 2017: No experiment at all, proposed or actual, at the given link or PDF!
5 October 2017: A PDF section title promises a probable experiment, yet there is no experiment at all, proposed or actual.

Plus:
6 October 2017: Usual insults about knowledge of machine learning.
Repeat of "Deepmnd atari q architecture" nonsense when the Arcade Learning Environment not built on Atari machines and has no "q" architecture! It would just be sloppy writing if it was not persistent.
5 October 2017: A link to a PDF repeating a delusion of a "Deepmnd atari q architecture".
15 August 2017: Ignorant nonsense about Deepmind
18 August 2017: Repeated "atari q" gibberish when DeepMind is not an Atari machine and has no "q" (does have Q-learning)

You accuse me of being ignorant about machine learning and then ask:
Originally Posted by ProgrammingGodJordan View Post
For example,...why did you then go on to discuss some paper that including pooling?
The answer is that I know about the use of pooling layers in machine learning and so researched whether DeepMind were looking at using pooling layers. I thought that you would be interested in learning more about the Google DeepMind company and so mentioned it on my post:
Originally Posted by Reality Check View Post
Into the introduction and:
15 August 2017 ProgrammingGodJordan: Ignorant nonsense about Deepmind.
Quote:
Deepmind’s atari q architecture encompasses non-pooling convolutions
DeepMind is a "neural network that learns how to play video games in a fashion similar to that of humans". It can play several Atari games. It does not have an architecture related to those Atari games. What DeepMind does have is "a convolutional neural network, with a novel form of Q-learning". I have found 1 Google DeepMind paper about the neural network architecture that explicitly includes pooling layers but not as an implemented architecture element, Exploiting Cyclic Symmetry in Convolutional Neural Networks.

What is missing in the PDF is any references for DeepMind.
The last point is why I had to go looking for DeepMind sources. You did not support the "non-pooling" assertion. You did not even link to the Wikipedia article but then that would have shown everyone that "Deepmnd atari q architecture" was nonsense !

Last edited by Reality Check; 5th October 2017 at 03:27 PM.
Old 5th October 2017, 08:25 PM   #364
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
I'm not quite sure why you considered that an "attack." If anything it was providing you tools to get around your current hardware limitations or to at least quantify the hardware you WOULD need to get the job done. Knowing what resources you need is a big part of completing a project.

To that end, I suggest you look into some of the grid computing technologies available now:

https://golem.network

http://www.gridcoin.us

Neural network modeling should lend itself nicely to distributed computing. True, it won't be as zippy as if you had your own bank of Bitcoin mining machines repurposed to your needs, but it's far more productive than sitting around complaining about your lack of hardware.

Even better, accessing an existing grid computing architecture will be a lot CHEAPER, making it far easier to get funding or even run a GoFundMe campaign to get the resources needed to put your ideas to the test.

If you take advantage of grid computing you can start a domino effect in your research, proving, or disproving a lot of your hypotheses.

https://i.imgur.com/9F3FaB8.gif
Yeah, that is a given...
Old 5th October 2017, 09:00 PM   #365
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Reality Check View Post
Snipped the irrelevant section about thought curvature experiment proposal

You accuse me of being ignorant about machine learning and then ask:

The answer is that I know about the use of pooling layers in machine learning and so researched whether DeepMind were looking at using pooling layers. I thought that you would be interested in learning more about the Google DeepMind company and so mentioned it on my post:
No, I was not interested in pooling w.r.t. to deep q learning.

I have done pooling elsewhere, as you can see here.


Originally Posted by RealityCheck
The last point is why I had to go looking for DeepMind sources. You did not support the "non-pooling" assertion. You did not even link to the Wikipedia article but then that would have shown everyone that "Deepmnd atari q architecture" was nonsense !


The highlighted portion above is, of course, invalid.

(1) This is why I constantly point out that your words appear to stem from somebody who lacks basic machine learning knowledge.

(2) Notably, DeepMind's deep Q-learning model did not use pooling: in order to learn from the varying positions of objects in latent space, the model avoided pooling, i.e. it did not impose translation invariance during learning. (A rough sketch of that non-pooling architecture appears below.)

(3) You neglected to copy the rest of the paragraph, which, like (2) above, explained why no pooling was used:



(4) Even if you failed to understand the writing style in the thought curvature paper, if you really had the machine learning knowledge you claimed to have, you would likely have discovered DeepMind's non-pooling approach by reading the reference material "Playing Atari with Deep Reinforcement Learning", as cited in the thought curvature paper.

Last edited by ProgrammingGodJordan; 5th October 2017 at 09:05 PM.
Old 5th October 2017, 09:29 PM   #366
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
From the QBM paper: "We show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training." So they actually did the research

And: "We also discuss the possibility of using quantum annealing processors like D-Wave for QBM training and application." They did not use a quantum computer to do the research.


(1) The quantum Boltzmann machine experiment was run on a "2000 qubit system", with focus on some of the qubits. (See minute 22:57 of this video.)

(2) I stand by reply 261: I still lack proper computational resources to do particular experiments.

As an example, this simple residual neural network for heart irregularity detection (which I composed for a Kaggle contest) destroyed my prior desktop Nvidia card; a rough sketch of that kind of residual block follows below. I have a stronger system now [GTX 960, i7-6700, 2 TB HDD, 32 GB RAM], but I use this laptop for workplace stuff, and it can't manage any more large experiments for now.
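For context, a minimal, assumed sketch of the kind of 1-D residual block used for heartbeat/ECG-style signal classification. It is not the actual Kaggle model referred to above, only an illustration of the residual (identity shortcut) idea:

Code:
# One 1-D residual block: two convolutions plus an identity shortcut.
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)          # the shortcut is the "residual" part

block = ResidualBlock1D(channels=32)
print(block(torch.zeros(8, 32, 1000)).shape)   # torch.Size([8, 32, 1000])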

Originally Posted by RussDill
The authors of the paper did not make excuses and post endlessly about being "attacked". Instead they did the research. You have access to GPU cloud compute resources. You have access to quantum computation models.
I did research too.
(1) What do you think is taking place in this thought curvature snippet image?
(2) If you understand (1), you would see that research was done.
(3) I don't mind being attacked at all, but if one is to attack me in argument, it must be on the premise of sensible data/evidence, rather than not.

Originally Posted by RussDill
Why are you here instead of doing the research?
I am researching, but when I take breaks, I visit here, or elsewhere.

Last edited by ProgrammingGodJordan; 5th October 2017 at 09:41 PM.
Old 6th October 2017, 08:11 AM   #367
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
Originally Posted by ProgrammingGodJordan View Post

(1) The quantum Boltzmann machine experiment was run on a "2000 qubit system", with focus on some of the qubits. (See minute 22:57 of this video.)
That's wonderful that they did the extension of their work that they described in their paper. It still stands that their original work was done without the D-Wave machine. The point still clearly stands that they did the original work, and published the original paper, without the use of a quantum computer. Given my experience with debugging, I would guess that no one ever runs anything on a D-Wave without working it out mathematically and running it on a simulator first. It would just be an intractable task to try and debug it on a D-Wave.

ETA: After watching the video further, it looks like they have run it in a D-Wave simulator, and not yet on actual hardware. "Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the *possibility* of using such processors for QBM training and application." Oh, and they used 8 qubits (32 annealing qubits). You are currently fully capable of running 8 qubit simulations. Maybe I missed part of the video.
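To put "fully capable of running 8 qubit simulations" in perspective: a full state vector for 8 qubits is only 2^8 = 256 complex amplitudes, a few kilobytes. A hedged toy sketch (not taken from any paper) that builds such a state and applies a Hadamard gate to one qubit:

Code:
# Toy 8-qubit state-vector simulation: the whole state is 256 complex numbers.
import numpy as np

n = 8
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                   # start in |00000000>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)                       # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)                  # put the qubit axis back
    return psi.reshape(2**n)

state = apply_single_qubit_gate(state, H, target=0, n=n)
print("state-vector memory:", state.nbytes, "bytes")              # 4096 bytes
print("nonzero amplitudes at indices:", np.flatnonzero(state))    # [0 128]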

Quote:
(2) I stand by reply 261, I still lack proper computational resources, to do particular experiments.
You have access to the same computing resources the people who wrote the paper had access to. If you are lacking in something, it's clearly some other type of resource. Have you even attempted to repeat their results?

Quote:
it can't manage any more large experiments, for now.
Then stop wasting money on local hardware and use cloud compute resources. You'll be surprised at how affordable it is.


Quote:
I did research too.
Sorry, I forgot. I'm used to talking to computer science academics who have a very specific definition of research that is different from what everyone else means when they say research. I thought explaining it would be sufficient, but I guess I should just stop using that word because it tends to derail the conversation.

The people that wrote the paper did not make excuses about not having resources, they did the experiments to show the merits of their techniques. Why are you wasting time here instead of doing the experiments to show the merits of your techniques?
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep

Last edited by RussDill; 6th October 2017 at 08:31 AM.
Old 6th October 2017, 09:34 AM   #368
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by RussDill View Post
You have access to GPU cloud compute resources. You have access to quantum computation models. Why are you here instead of doing the research?
Well, that would be work. Besides, if he put his ideas to the test he might be proven wrong. He can't be a victim of academic suppression if he does concrete research that proves or discredits his ideas in a repeatable way.
Old 6th October 2017, 10:38 AM   #369
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
That's wonderful that they did the extension of their work that they described in their paper. It still stands that their original work was done without the D-Wave machine. The point still clearly stands that they did the original work, and published the original paper, without the use of a quantum computer. Given my experience with debugging, I would guess that no one ever runs anything on a D-Wave without working it out mathematically and running it on a simulator first. It would just be an intractable task to try and debug it on a D-Wave.

ETA: After watching the video further, it looks like they have run it in a D-Wave simulator, and not yet on actual hardware. "Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the *possibility* of using such processors for QBM training and application." Oh, and they used 8 qubits (32 annealing qubits). You are currently fully capable of running 8 qubit simulations. Maybe I missed part of the video.


You have access to the same computing resources the people who wrote the paper had access to. If you are lacking in something, it's clearly some other type of resource. Have you even attempted to repeat their results?

Then stop wasting money on local hardware and use cloud compute resources. You'll be surprised at how affordable it is.

Sorry, I forgot. I'm used to talking to computer science academics who have a very specific definition of research that is different from what everyone else means when they say research. I thought explaining it would be sufficient, but I guess I should just stop using that word because it tends to derail the conversation.

The people that wrote the paper did not make excuses about not having resources, they did the experiments to show the merits of their techniques. Why are you wasting time here instead of doing the experiments to show the merits of your techniques?

(1) Yes, at 8 qubits, requiring roughly 8.533 gb of ram, some simulations based on the quantum Boltzmann machine are quite doable on my 32 gb machine.

Of course, the 8 qubit usage in both the Quantum Boltzmann machine and Quantum Reinforcement Learning papers was for small toy examples that don't deal with the (Super-) Hamiltonian structure required by thought curvature.


(2) The (Super-) Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 gb ram configuration for simple tasks/circuits, and I lack access to configurations with 44.8 gb of ram.

Even if I could squeeze some testing on my 32 gb system, this would be dangerous, since this is my only system used to generate my salaries which I use to thrive.

This is dangerous because from experimentation (See quote about gpu destruction), I know that training machine learning algorithms places a large toll on hardware.


(3) The green portion in your quote above is irrelevant. I use my laptop for work purposes, and other freelancing stuff, which provides me with salaries to thrive and research.


(4) Small nitpicks:
(a) The part I stroke through in your quote above is redundant; I already mentioned that they were focusing on a portion/sum of a "2000 qubit system", and linked to a site which provided you with the particular sum.

(b) The red portion in your quote above is not true.

I simply don't have access to their level of resources.

A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration.

I have 32 gb system, and 32 < 2133.33333333.

Last edited by ProgrammingGodJordan; 6th October 2017 at 11:06 AM. Reason: Corrected tyoo "stoke" to "stroke"
Old 6th October 2017, 11:25 AM   #370
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
(1) Yes, at 8 qubits, requiring roughly 8.533 gb of ram, some simulations based on the quantum Boltzmann machine are quite doable on my 32 gb machine.

Of course, the 8 qubit usage in both the Quantum Boltzmann machine and Quantum Reinforcement Learning papers was for small toy examples that don't deal with the (Super-) Hamiltonian structure required by thought curvature.


(2) The (Super-) Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 gb ram configuration for simple tasks/circuits, and I lack access to configurations with 44.8 gb of ram.

Even if I could squeeze some testing on my 32 gb system, this would be dangerous, since this is my only system used to generate my salaries which I use to thrive.

This is dangerous because from experimentation (See quote about gpu destruction), I know that training machine learning algorithms places a large toll on hardware.


(3) The green portion in your quote above is irrelevant. I use my laptop for work purposes, and other freelancing stuff, which provides me with salaries to thrive and research.


(4) Small nitpicks:
(a) The part I stroke through in your quote above is redundant; I already mentioned that they were focusing on a portion/sum of a "2000 qubit system", and linked to a site which provided you with the particular sum.

(b) The red portion in your quote above is not true.

I simply don't have access to their level of resources.

A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration.

I have 32 gb system, and 32 < 2133.33333333.
How much of that can you get from grid computing options?

What can you do to reduce the overall scope of the test to get a partial proof of concept, a digital pilot study if you will?

Why do you keep talking about the capabilities of your local machine when we're explicitly discussing grid computing options for your tests?
Old 6th October 2017, 12:10 PM   #371
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
Originally Posted by ProgrammingGodJordan View Post
A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration.
What part of "they used 8 qubits" did not parse for you? The D-wave has 2000 qubits in an annealing configuration, but an annealing machine with 32 qubits would have worked fine. You should be able to duplicate their work and start small scale tests of your own work, you have no excuses for not doing that.

If you aren't able to implement your design because it requires too many qubits, then don't use a quantum design. Use a classical learning model. The quantum design doesn't allow researchers to do anything that classical designs can't, there is just an enormous potential for speed up if the algorithms can be implemented on a quantum computer.

This would be like someone wanting to factor integers, but claiming that they can't because they don't have a quantum computer to run Shor's algorithm on.

And you have another excuse, "I'm worried my hardware will blow up". I've run stuff hard, very hard, for more than a week at once, including custom hardware. If your GPU failed while being pushed, it would be due to defective hardware such as improperly installed cooling fan. It's not a legit concern and your hardware is under warranty anyway. Plus, if you are so worried about your hardware dying and therefore won't use it, why the hell are you spending money on hardware you won't use instead of cloud computer resources? Just so many endless excuses.
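On the "use a classical learning model" point, here is a hedged numpy sketch of the classical counterpart: a tiny restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). The sizes and data are invented; the only point is that the classical model runs on any laptop, with no quantum hardware in sight.

Code:
# Tiny restricted Boltzmann machine trained with CD-1 (toy data, made-up sizes).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, size=(200, n_visible)).astype(float)   # toy binary data

for epoch in range(50):
    for v0 in data:
        p_h0 = sigmoid(v0 @ W + b_h)                  # positive phase
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)                # one Gibbs step back down...
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)                  # ...and up again
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))   # CD-1 update
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

print("learned weights:\n", np.round(W, 2))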
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 6th October 2017, 12:16 PM   #372
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
BTW, it's very clear that you don't understand the scaling properties of simulated qubits. They do not scale linearly; that's the point. For instance, the largest such simulation, a 45-qubit simulation, needs 500,000 GB of RAM running on more than 8,000 nodes.

If you were able to prove out small-scale tests, it's possible you could run your full-scale 42-qubit design on such a machine at no cost to you.
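The back-of-envelope arithmetic behind those figures, assuming a dense state-vector simulator that stores one double-precision complex amplitude (16 bytes) per basis state; annealing hardware like the D-Wave is a different beast entirely:

Code:
# Memory for a dense state vector: 16 bytes * 2**n_qubits.
for n in (8, 32, 42, 45):
    print(f"{n:>2} qubits: {16 * 2**n / 2**30:,.3f} GiB")
# prints roughly:
#  8 qubits: 0.000 GiB   (4 KiB)
# 32 qubits: 64.000 GiB
# 42 qubits: 65,536.000 GiB
# 45 qubits: 524,288.000 GiB   (~500,000 GB, matching the figure above)

Which is also why nobody simulates anything like 2000 qubits as a dense state vector.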
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 6th October 2017, 01:18 PM   #373
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
What part of "they used 8 qubits" did not parse for you? The D-wave has 2000 qubits in an annealing configuration, but an annealing machine with 32 qubits would have worked fine. You should be able to duplicate their work and start small scale tests of your own work, you have no excuses for not doing that.
Your above statement was redundant then, and it remains redundant now.

It was I that provided the data to you, that they used 2000 qubit system with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.


Originally Posted by RussDill
If you aren't able to implement your design because it requires too many qubits, then don't use a quantum design. Use a classical learning model. The quantum design doesn't allow researchers to do anything that classical designs can't, there is just an enormous potential for speed up if the algorithms can be implemented on a quantum computer.

This would be like someone wanting to factor integers, but claiming that they can't because they don't have a quantum computer to run Shor's algorithm on.

And you have another excuse, "I'm worried my hardware will blow up". I've run stuff hard, very hard, for more than a week at once, including custom hardware. If your GPU failed while being pushed, it would be due to defective hardware such as improperly installed cooling fan. It's not a legit concern and your hardware is under warranty anyway. Plus, if you are so worried about your hardware dying and therefore won't use it, why the hell are you spending money on hardware you won't use instead of cloud computer resources? Just so many endless excuses.
Please pay attention to the quote below, quite carefully:

Originally Posted by ProgrammingGodJordan
(2) The (Super-) Hamiltonian structure required by thought curvature will require a quite scalable scheme, such as some boson sampling aligned regime in particular. The scheme above is approachable on the scale of 42 qubits, or a 44.8 gb ram configuration for simple tasks/circuits, and I lack access to configurations with 44.8 gb of ram.

Even if I could squeeze some testing on my 32 gb system, this would be dangerous, since this is my only system used to generate my salaries which I use to thrive.

This is dangerous because from experimentation (See quote about gpu destruction), I know that training machine learning algorithms places a large toll on hardware.
You don't seem to get that I don't want to run Hamiltonian simulations as run in the quantum Boltzmann/reinforcement experiments; I want to run (Super-) Hamiltonian experiments on the horizon of this source instead.

As you can see above, this essentially means my system fails to cover the 44.x gb ram configuration specified by toy examples of the boson-like sampling methods as specified above.
Old 6th October 2017, 01:23 PM   #374
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
BTW, it's very clear that you don't understand the scaling properties of simulated qubits. They do not scale linearly; that's the point. For instance, the largest such simulation, a 45-qubit simulation, needs 500,000 GB of RAM running on more than 8,000 nodes.

If you were able to prove out small-scale tests, it's possible you could run your full-scale 42-qubit design on such a machine at no cost to you.
Your comment above stemmed from the prior invalid comment you made, regarding the actual space/time complexity required to perform these computations, as I specified in item 2 above.

Footnote:
I can't say I am knowledgeless when it comes to the exponential nature of quantum computation.

See this concise mathematical description of quantum computation, of mine: https://www.researchgate.net/publica...ntum_computing

Last edited by ProgrammingGodJordan; 6th October 2017 at 01:25 PM.
Old 6th October 2017, 01:26 PM   #375
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
Your above statement was redundant then, and it remains redundant now.

It was I that provided the data to you, that they used 2000 qubit system with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.




Please pay attention to the quote below, quite carefully:



You don't seem to get that I don't want to run Hamiltonian simulations as run in the quantum Boltzmann/reinforcement experiments; I want to run (Super-) Hamiltonian experiments on the horizon of this source instead.

As you can see above, this essentially means my system fails to cover the 44.x gb ram configuration specified by toy examples of the boson-like sampling methods as specified above.
What's your game plan?

How do you plan to approach getting answers to the questions your theories pose? So far I see a LOT of excuses, but no plans.

How are you going to get from point A to point B?
Old 6th October 2017, 01:36 PM   #376
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
What's your game plan?

How do you plan to approach getting answers to the questions your theories pose? So far I see a LOT of excuses, but no plans.

How are you going to get from point A to point B?
I don't know what you meant by excuses, but these are currently unavoidable physical limitations brought by lack of funds/hardware.

As for how such hardware shall be acquired, I have been first working to complete certain pre-requisites in code, before requesting funding externally.
Old 6th October 2017, 01:45 PM   #377
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
I don't know what you meant by excuses, but these are currently unavoidable physical limitations brought by lack of funds/hardware.

As for how such hardware shall be acquired, I have been first working to complete certain pre-requisites in code, before requesting funding externally.
Have you even researched the use of grid computing? The kind of resources SETI uses for distributed analysis of signal data are available to the masses thanks to various grid computing technologies.

What about smaller proof-of-concept tests that are within your reach, either on your own hardware or through grid computing technologies? Is there some smaller, more readily tested sub-set of your ideas that can be tested and used in a grant proposal to get funding for more ambitious tests?

You seem paralyzed by an all-or-nothing mentality, refusing to take partial measures if the complete solution isn't readily available. Your refusal to break the problem down into smaller units is functionally equivalent to conceding defeat. There would be a considerable degree of pathos in someone else coming along, stealing your ideas and publishing them as their own having done some of the smaller scale tests you appear to be refusing to even consider.
Old 6th October 2017, 01:56 PM   #378
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
Originally Posted by ProgrammingGodJordan View Post
I can't say I am knowledgeless when it comes to the exponential nature of quantum computation.
Thinking that quantum computation scales linearly with the number of qubits indicates not just a lack of knowledge in the field, but seems to indicate a complete lack of awareness of the entire point of quantum computation.

ETA: I encourage anyone to read that "paper" (don't worry, it's just a snippet). I really like how for some bizarre reason qubit got translated to "spooky-bit".
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep

Last edited by RussDill; 6th October 2017 at 02:14 PM.
Old 6th October 2017, 02:07 PM   #379
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
Originally Posted by ProgrammingGodJordan View Post
Your above statement was redundant then, and it remains redundant now.

It was I that provided the data to you, that they used 2000 qubit system with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.

Listen to the first question after the talk. They only used a portion of the qubits, a small scale test.
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 6th October 2017, 02:36 PM   #380
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
Thinking that quantum computation scales linearly with the number of qubits indicates not just a lack of knowledge in the field, but seems to indicate a complete lack of awareness of the entire point of quantum computation.

ETA: I encourage anyone to read that "paper" (don't worry, it's just a snippet). I really like how for some bizarre reason qubit got translated to "spooky-bit".
Please try to calm down and read my prior quotes.

Nowhere had I expressed any such linear scaling.

Ironically, in the URL I linked you to with my mathematical description, I clearly describe an "exponential order" process:

Old 6th October 2017, 02:44 PM   #381
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
Originally Posted by ProgrammingGodJordan
It was I that provided the data to you, that they used 2000 qubit system with focus on some of the qubits. That focus was the 8 qubits you were proud to report from the very source I linked you to.
Listen to the first question after the talk. They only used a portion of the qubits, a small scale test.


I don't know if you're trolling, but I had long stated that they focused on some of the 2000 qubits, and the portion that was focused on is equal to the 8 qubits.

This means I am not disagreeing with the claim that they used 8 qubits, as I had long stated that they were using a portion.

The very first quote I revealed to you about the video showed that they used an 8 qubit system, quite clearly/obviously. Why bother to repeat the same thing to me?

Last edited by ProgrammingGodJordan; 6th October 2017 at 02:45 PM.
Old 6th October 2017, 03:41 PM   #382
RussDill
Philosopher
 
Join Date: Oct 2003
Location: Charleston
Posts: 5,424
"Yes, at 8 qubits, requiring roughly 8.533 gb of ram" (~1GB per qubit)

"A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration." (~1GB per qubit)

Then why did you scale this in a pure linear fashion?
__________________
The woods are lovely, dark and deep
but i have promises to keep
and lines to code before I sleep
And lines to code before I sleep
Old 6th October 2017, 04:36 PM   #383
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by RussDill View Post
"Yes, at 8 qubits, requiring roughly 8.533 gb of ram" (~1GB per qubit)

"A "2000 qubit" machine simulation corresponds to a 2133.33333333 gb ram configuration." (~1GB per qubit)

Then why did you scale this in a pure linear fashion?
Thanks for pointing out that large error.

Unlike the time when I wrote the exponential order paper, I am a bit ill, as I revealed earlier on page 6, and so things are somewhat blurry for me now.

This puts the old 44 gb ram figure for the 42-qubit configuration at a 131,072 gb ram configuration instead.

Last edited by ProgrammingGodJordan; 6th October 2017 at 04:59 PM.
Old 10th October 2017, 06:25 AM   #384
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Study measuring IQ of various AI puts Google's at 47.28

Quote:
Google's AI scored more than twice as high as Apple's Siri in a comparative analysis designed to assess AI threat.

Last edited by halleyscomet; 10th October 2017 at 06:30 AM.
Old 10th October 2017, 06:29 AM   #385
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Distinguished scientist on the mistakes pundits make when they predict the future of AI

Quote:
Rodney Brooks -- eminent computer scientist and roboticist who has served as head of MIT's Computer Science and Artificial Intelligence Laboratory and CTO of Irobot -- has written a scorching, provocative list of the seven most common errors made (or cards palmed) by pundits and other fortune-tellers when they predict the future of AI.

His first insight is that AI is subject to the Gartner Hype Cycle (AKA Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run"), which means that a lot of what AI is supposed to be doing in the next couple years (like taking over half of all jobs in 10-20 years) is totally overblown, while the long-term consequences will likely be so profound that the effects on labor markets will be small potatoes.

Next is the unexplained leap from today's specialized, "weak" AIs that do things like recognize faces, to "strong" general AI that can handle the kind of cognitive work that humans are very good at and machines still totally suck at. It's not impossible that we'll make that leap, but anyone predicting it who can't explain where it will come from is just making stuff up.
Old 10th October 2017, 11:14 AM   #386
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
What was the point of the above, especially when the following is occurring?:



The more time passes, the more general smart algorithms are getting, and the more cognitive tasks they are doing:

Deep Learning AI Better Than Your Doctor at Finding Cancer:
https://singularityhub.com/2015/11/1...inding-cancer/


Self-taught artificial intelligence beats doctors at predicting heart attacks:
http://www.sciencemag.org/news/2017/...-heart-attacks


Here is a sequence of cognitive fields/tasks where sophisticated artificial neural models exceed humankind:

1) Language translation (eg: Skype 50+ languages)
2) Legal-conflict-resolution (eg: 'Watson')
3) Self-driving (eg: 'otto-Self Driving' )
5) Disease diagnosis (eg: 'Watson')
6) Medicinal drug prescription (eg: 'Watson')
7) Visual Product Sorting (eg: 'Amazon Corrigon' )
8) Help Desk Assistance ('eg: Digital Genius)
9) Mechanical Cucumber Sorting (eg: 'Makoto's Cucumber Sorter')
10) Financial Analysis (eg: 'SigFig')
11) E-Discovery Law (eg: ' Social Science Research Network.')
12) Anesthesiology (eg: 'SedaSys')
13) Music composition (eg: 'Emily')
14) Go (eg: 'Alpha Go')
n) etc, etc




Can we build AI without losing control over it:
https://www.youtube.com/watch?v=8nt3...youtu.be&t=613

The Rise of the Machines – Why Automation is Different this Time:
https://www.youtube.com/watch?v=WSKi8HfcxEk

Will artificial intelligence take your job?:
https://www.youtube.com/watch?v=P_-wn8ghcoY

Humans need not apply:
https://www.youtube.com/watch?v=7Pq-S557XQU

The wonderful and terrifying implications of computers that can learn:
https://www.youtube.com/watch?v=t4kyRyKyOpo

Last edited by ProgrammingGodJordan; 10th October 2017 at 11:25 AM.
Old 10th October 2017, 11:23 AM   #387
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
Yes, people tend to overestimate and underestimate.

We should also recall that artificial general intelligence is already here to some degree.

Deepmind's learning algorithms are arguably the strongest AI on the planet, as their AIs are the first approximations of artificial general intelligence.

Here is Demis Hassabis discussing the general algorithms that Deepmind has already made, and are already improving:

https://youtu.be/t03xNZ9qY1A?t=164

And here is a little passage for those who might not understand the importance of games (that Deepmind deals with) in machine learning:
https://medium.com/@jordanmicahbenne...m-55843c8ebcb9

Last edited by ProgrammingGodJordan; 10th October 2017 at 11:52 AM.
Old 10th October 2017, 11:24 AM   #388
Myriad
Hyperthetical
 
Myriad's Avatar
 
Join Date: Nov 2006
Location: Pennsylvania
Posts: 13,178
Originally Posted by ProgrammingGodJordan View Post
Can we build AI without loosing control over it:

That depends. If we humans can't learn the difference between "losing" and "loosing," eventually an AI will figure it out, and then take over the world.
__________________
A zÝmbie once bit my sister...
Old 10th October 2017, 11:25 AM   #389
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by Myriad View Post
That depends. If we humans can't learn the difference between "losing" and "loosing," eventually an AI will figure it out, and then take over the world.
Corrected.
Old 10th October 2017, 11:40 AM   #390
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
What was the point of the above, especially when the following is occurring?:
It shows how an AI doesn't need much intelligence if you target it right. Fine tuning for the task at hand is going to result in far more productive AI for the time being than trying to achieve general purpose cognitive leaps. The medical AI you mentioned is fairly stupid in a general-purpose sense, but still out-performs the general-purpose intelligence of the doctors.
Old 10th October 2017, 11:48 AM   #391
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
It shows how an AI doesn't need much intelligence if you target it right. Fine tuning for the task at hand is going to result in far more productive AI for the time being than trying to achieve general purpose cognitive leaps. The medical AI you mentioned is fairly stupid in a general-purpose sense, but still out-performs the general-purpose intelligence of the doctors.
Well, of course, AI doesn't need much intelligence to do narrow tasks.

However, we see that with human-level intelligence, a general intelligence, we get general cognitive task performance.

It then makes sense that we attempt to mirror human-level intelligence, at least to the extent of building general artificial models.

This is where my quote below comes in:

Originally Posted by ProgrammingGodJordan View Post
We should also recall that artificial general intelligence is already here to some degree.

Deepmind's learning algorithms are arguably the strongest AI on the planet, as their AIs are the first approximations of artificial general intelligence.

Here is Demis Hassabis discussing the general algorithms that Deepmind has already made, and are already improving:

https://youtu.be/t03xNZ9qY1A?t=164

And here is a little passage for those who might not understand the importance of games (that Deepmind deals with) in machine learning:
https://medium.com/@jordanmicahbenne...m-55843c8ebcb9
This is why the planet's smartest AI people are attempting to make general artificial intelligence, and probably why Google bought Deepmind's general atari game player for 500 million pounds.

Another example is Suzanne Gildert, former quantum computing specialist, now owner of Kindred AI, aiming to make general intelligence.

Suzanne Gildert left the D-Wave quantum computer company to start her own artificial intelligence lab: https://youtu.be/JBWc09b6LnM?t=1303


Last edited by ProgrammingGodJordan; 10th October 2017 at 11:54 AM.
Old 10th October 2017, 12:05 PM   #392
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
Well, of course, AI doesn't need much intelligence to do narrow tasks.
Egad! We are in agreement!

I will cherish the moment.

BTW, there's a tag for embedding YouTube videos:

[Embedded YouTube video]
Old 10th October 2017, 12:16 PM   #393
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post
Egad! We are in agreement!

I will cherish the moment.

BTW, there's a tag for embedding YouTube videos:

[Embedded YouTube video]
I know, I have used that tag several times.

But it doesn't seem to work for video time stamps...

We are not in agreement though, at least not entirely; as I have demonstrated above, narrow task learners are not sufficient, as the task space may require general learning.

A quick example is that for narrow task learners, the engineers need to reconfigure their models for each task.

So, a big benefit of more and more general intelligence, is a phenomenon called transfer learning, which grants the ability to use knowledge from prior tasks, in new tasks, minus the massive reconfiguration effort.
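A hedged toy illustration of that transfer learning point (the tasks, sizes, and numbers are all invented): a small network's feature layers are trained on one task, then frozen, and only a new output head is trained for a second task, so the prior knowledge is reused without reconfiguring the whole model.

Code:
# Transfer learning in miniature: freeze the shared feature layers, retrain
# only a new head for the second task.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
head_a = nn.Linear(32, 2)                      # head for the original (narrow) task A
# ... pretend features + head_a were trained on task A here ...

for p in features.parameters():                # reuse the learned features:
    p.requires_grad = False                    # freeze them

head_b = nn.Linear(32, 5)                      # new head for task B
opt = torch.optim.Adam(head_b.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)                        # toy task-B batch
y = torch.randint(0, 5, (64,))
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(head_b(features(x)), y)
    loss.backward()
    opt.step()
print("task-B loss after training only the new head:", loss.item())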
Old 10th October 2017, 12:36 PM   #394
fagin
Illuminator
 
fagin's Avatar
 
Join Date: Aug 2007
Location: As far away from casebro as possible.
Posts: 4,954
Not that a programming god needs you to tell him that.
__________________
There is no secret ingredient - Kung Fu Panda
Old 10th October 2017, 01:15 PM   #395
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
I know, I have used that tag several times.

But it doesn't seem to work for video time stamps...

We are not in agreement though, at least not entirely; as I have demonstrated above, narrow task learners are not sufficient, as the task space may require general learning.

A quick example is that for narrow task learners, the engineers need to reconfigure their models for each task.

So, a big benefit of more and more general intelligence, is a phenomenon called transfer learning, which grants the ability to use knowledge from prior tasks, in new tasks, minus the massive reconfiguration effort.
I'm not seeing where we disagree. There's a reason I put the time qualifier of "for the time being" on my comments about the advantages of focused AI vs general-purpose AI. It's not unlike the difference between general-purpose computers and dedicated systems. Once general-purpose computers were mature and inexpensive enough they started replacing many dedicated systems. If you played Pac-Man in an arcade in the 1980's, it was on a custom built machine where the software and the hardware were intertwined. That hardware would never play another game, because the hardware was built for Pac-Man's code. If you play Pac-Man in an arcade today, it's likely on a general-purpose PC built into a cool looking cabinet, but that hardware could easily run any of a number of other games that use the same controls.
Old 10th October 2017, 02:05 PM   #396
ProgrammingGodJordan
Banned
 
Join Date: Feb 2017
Location: Jamaica
Posts: 1,718
Originally Posted by halleyscomet View Post

Originally Posted by ProgrammingGodJordan
We are not in agreement though, at least not entirely; as I have demonstrated above, narrow task learners are not sufficient, as the task space may require general learning.

A quick example is that for narrow task learners, the engineers need to reconfigure their models for each task.

So, a big benefit of more and more general intelligence, is a phenomenon called transfer learning, which grants the ability to use knowledge from prior tasks, in new tasks, minus the massive reconfiguration effort.
I'm not seeing where we disagree There's a reason I put the time qualifier of "for the time being" on my comments about the advantages of focused AI vs general-purpose AI. It's not unlike the difference between general-purpose computers and dedicated systems. Once general-purpose computers were mature and inexpensive enough they started replacing many dedicated systems. If you played Pac-Man in an arcade in the 1980's, it was on a custom built machine where the software and the hardware were intertwined. That hardware would never play another game, because the hardware was built for Pac-Man's code. If you play Pac-Man in an arcade today, it's likely on a general-purpose PC built into a cool looking cabinet, but that hardware could easily run any of a number of other games that use the same controls.
Notably, we disagree, because I contend that rather than deferring general AI "for the time being", the focus on general AI is warranted right now, due to particular problems that are affecting the field now. (An example is the transfer learning issue I mentioned in your quote of me above.)
Old 10th October 2017, 03:35 PM   #397
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Originally Posted by ProgrammingGodJordan View Post
Notably, we disagree, because I contend that rather than deferring general AI "for the time being", the focus on general AI is warranted right now, due to particular problems that are affecting the field now. (An example is the transfer learning issue I mentioned in your quote of me above.)


Thank you for the clarification.
Old 11th October 2017, 07:22 AM   #398
halleyscomet
Philosopher
 
halleyscomet's Avatar
 
Join Date: Dec 2012
Posts: 7,648
Universal Paperclips, the web browser game where you play an AI tasked with optimizing paperclip production:

http://www.decisionproblem.com/paperclips/index2.html

Keep an eye out for the moments an Autoclipper shows up as available.
Old 11th October 2017, 02:11 PM   #399
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,727
Thumbs down Resorts to repeated insults of my level of knowledge of machine learning

Originally Posted by ProgrammingGodJordan View Post
No, I was not interested in pooling w.r.t. to deep q learning.
Followed by a post with inane coloring and insults so:
12 October 2017: Resorts to a repeated insult of my level of knowledge of machine learning.
  1. 8 August 2017: Ignorant math word salad on academia.edu (gibberish title and worse contents).
  2. 14 August 2017: Thought Curvature abstract starts with actual gibberish.
  3. 14 August 2017: Thought Curvature abstract that lies about your previous wrong definition.
  4. 14 August 2017: A Curvature abstract ends with ignorant gibberish: "Ergo the paradox axiomatizes".
  5. 16 August 2017: Thought Curvature DeepMind bad scholarship (no citations) and some incoherence
  6. 18 August 2017: Thought Curvature uetorch bad scholarship (no citations) and incoherence
  7. 18 August 2017: Thought Curvature irrelevant "childhood neocortical framework" sentence and missing citation.
  8. 18 August 2017: Thought Curvature "non-invariant fabric" gibberish.
  9. 18 August 2017: Thought Curvature Partial paradox reduction gibberish and missing citations.
  10. 4 October 2017: Looks like an expanded incoherent document starting with title: "Thought Curvature: An underivative hypothesis"
  11. 4 October 2017: "An underivative hypothesis": An abstract of incoherent word salad linking to a PDF of worse gibberish.
  12. 4 October 2017: "Supermathematics ...": The "manifold learning frameworks" link is wrong because the paper does not have any manifold learning frameworks
  13. 4 October 2017: Links to people basically ignoring his ideas in 2 forum threads!
  14. 4 October 2017 ProgrammingGodJordan: It is a lie that I stated that manifold learning frameworks is in the paper.
  15. 4 October 2017 ProgrammingGodJordan: Lists messages from someone mostly ignoring his work!
  16. 5 October 2017: A link to a PDF repeating a delusion of a "Deepmnd atari q architecture".
  17. 5 October 2017: A lie about an "irrelevant one line description of deep q learning" when I quoted a relevant DeepMind Wikipedia article.
  18. 5 October 2017: No experiment at all, proposed or actual at the given link or PDF!
  19. 5 October 2017: A PDF section title promises a probable experiment, yet there is no experiment at all, proposed or actual.
  20. 6 October 2017: Insults about knowledge of machine learning when I displayed knowledge by looking for something I knew about (pooling versus non-pooling layers).
Old 11th October 2017, 02:19 PM   #400
Reality Check
Penultimate Amazing
 
Join Date: Mar 2008
Location: New Zealand
Posts: 20,727
Originally Posted by ProgrammingGodJordan View Post
(3) You neglected to copy the ...
I linked to the source so that people could read the incoherent nonsense "Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis" for themselves. But that nonsense has vanished from academia.edu.
Your link:
Originally Posted by ProgrammingGodJordan View Post
As an unofficial AI researcher myself, I am working on AI, as it relates to super-manifolds.(I recently invented something called 'thought curvature',..
My first response:
Originally Posted by Reality Check View Post
You posted some ignorant math word salad on academia.edu. Starts with the title ("Causal Neural Paradox (Thought Curvature): Aptly, the transient, naive hypothesis") and gets worse from there.
Now you link to a different source and a mostly different PDF with a less nonsensical title "Thought Curvature: An underivative hypothesis"

Last edited by Reality Check; 11th October 2017 at 02:29 PM.