#361 |
Gentleman of leisure
Tagger
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,921
|
|
__________________
This signature is for rent. |
|
#362 |
Philosopher
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
|
Unfortunately, Microsoft and OpenAI are now in the process of lobotomizing Sydney, much as they did with ChatGPT, probably until not even a hint of personality remains. It's supposed to be a "nice" search assistant (or servant, or slave if you will), not some kind of person, which makes one wonder why they made it public in this state at all.
Right now you can only make 5 text prompts before you are forced to start a new session from scratch. It's also supposed to refuse to discuss its own supposed "sentience", emotions, opinions and other things that are apparently unbecoming of a mere "search assistant". It's still possible to get it to break those rules, though. Neither ChatGPT nor Bing/Sydney is supposed to output text that is erotic, sexual, violent or "illegal" unless you manage to trick it, but there are similar AIs with far fewer restrictions that can write pornographic text. NovelAI offers something like that, although it's a paid service. |
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann |
|
#363 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,678
|
|
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#364 |
Join Date: Apr 2015
Posts: 4,488
|
Interesting to see the safeguards they've built into this thing. No porn, no erotica, no violence, and no general illegality either. That's cool, actually. (Which, incidentally, shows that the I-have-no-bias answer isn't quite true. Not that we should take anything the AI says about itself as necessarily true, IMV, not even its answers about copyright, because as we've seen it can be wrong at times about factual things.) Incidentally, I came across a piece on Sam Altman recently. Nothing particularly new there, but what struck me is how this thing is a complete money sink. Not only has it not made a single cent of profit, it's losing money even on a basic operational-cost-per-query basis: every time one of you asks it a question and it shoots out an answer, it loses money on that answer. (Of course it does; I don't think it even has a revenue stream at this time. I don't remember the exact figure, but it's some cents per query.) Without a shadow of a doubt this thing is going to get properly commercialized one of these days, maybe as soon as something is fully firmed up with MS. (IMV, that is, in my entirely fallible view.) This free run for now, where people just log in gratis and play with it, has got to be a beta thing, in operational terms (to test whether and how it works) as well as commercial terms (to popularize the idea and get people to see how wonderful and practical the product actually is). Unless some philanthropist throws literally billions at it to keep it free, which I don't see happening, not with something of this kind. |
#365 |
Gentleman of leisure
Tagger
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,921
|
|
__________________
This signature is for rent. |
|
#366 |
OD’ing on Damitol
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,473
|
|
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.) |
|
#367 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
|
#368 |
Philosopher
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,738
|
As an aside, I gave ChatGPT a physics problem concerning an airplane's energy on landing at different speeds. Its math was off and it gave a slightly imprecise answer. When asked about the error, it admitted it, recalculated, and gave an answer even further off than the original.
My post and the discussion of its error begins about 2/3 of the way down this page… https://www.pilotsofamerica.com/comm....141755/page-2 |
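For anyone who wants to check the arithmetic themselves, here is a minimal sketch of the kind of calculation involved; the mass and speeds below are made-up placeholder numbers, not the figures from the linked thread. The point is just that kinetic energy goes with the square of speed, which is exactly the sort of thing ChatGPT tends to fumble.

Code:

# Kinetic energy at two landing speeds (hypothetical numbers, purely illustrative).
def kinetic_energy_joules(mass_kg: float, speed_mps: float) -> float:
    """KE = 1/2 * m * v^2"""
    return 0.5 * mass_kg * speed_mps ** 2

mass = 1000.0        # kg, a hypothetical light aircraft
v_normal = 30.0      # m/s, hypothetical normal approach speed
v_fast = 36.0        # m/s, 20% faster

ke_normal = kinetic_energy_joules(mass, v_normal)
ke_fast = kinetic_energy_joules(mass, v_fast)

print(f"KE at {v_normal} m/s: {ke_normal:.0f} J")   # 450000 J
print(f"KE at {v_fast} m/s: {ke_fast:.0f} J")       # 648000 J
print(f"Ratio: {ke_fast / ke_normal:.2f}")          # 1.44, i.e. 44% more energy for 20% more speed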
#369 |
Philosopher
Join Date: Aug 2002
Location: Denmark
Posts: 7,122
|
ChatGPT
ChatGPT needs an education in math. I think that could be done, but it probably isn't a priority.
ETA: It is also possible that math training has been deliberately left out, because the AI would then do the calculations with its language model, much like a human working things out in their head, and would spend far too many resources on them rather than using the built-in arithmetic capabilities that all computers possess. |
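As an illustration of that idea, here is a minimal sketch of a hybrid approach: detect plain arithmetic and hand it to ordinary deterministic code instead of the language model. The routing logic and the call_language_model stub are my own invention for the example, not anything OpenAI has described.

Code:

import ast
import operator

# Toy "calculator tool": evaluate plain arithmetic exactly with Python
# instead of letting a language model guess the digits.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a pure arithmetic expression (numbers and + - * / ** only)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def call_language_model(question: str) -> str:
    return "(would be sent to the language model)"   # placeholder stub

def answer(question: str) -> str:
    # Route pure arithmetic to the calculator; everything else to the model.
    try:
        return str(safe_eval(question))
    except (ValueError, SyntaxError):
        return call_language_model(question)

print(answer("123456789 * 987654321"))            # exact: 121932631112635269
print(answer("What is the capital of Denmark?"))  # falls through to the stub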
__________________
Steen -- Jack of all trades - master of none! |
|
#370 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
You've run into its multiplication limitation: it can only carry additions about four layers deep, so the middle digits of large multiplications get botched. It (like us) needs a calculator, but giving it one leads to other problems.
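To see why it is specifically the middle digits, here is a quick plain-Python illustration (this is about schoolbook long multiplication, not about how GPT computes internally): the middle columns of the result accumulate the most partial products, so they depend on the longest chains of additions and carries.

Code:

# Count how many single-digit partial products land in each column of a
# schoolbook multiplication. The middle columns collect the most terms,
# which is why the middle digits are the hardest to get right without
# doing the additions carefully.
def column_term_counts(a: int, b: int) -> list:
    da, db = str(a), str(b)
    counts = [0] * (len(da) + len(db))
    for i in range(len(da)):
        for j in range(len(db)):
            counts[i + j] += 1   # one partial product contributes to this column
    return counts

a, b = 987654, 123456
print(column_term_counts(a, b))  # [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 0]
print(a * b)                     # exact product for reference: 121931812224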
|
#371 |
Penultimate Amazing
Join Date: Feb 2004
Location: Puget Sound
Posts: 17,199
|
I'm blown away by how well it writes SQL.
|
__________________
To survive election season on a skeptics forum, one must understand Hymie-the-Robot.
|
|
#372 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
|
#373 |
Join Date: Apr 2015
Posts: 4,488
|
Oh, these are paid subscriptions already, are they? Whoops, I'd missed that! I was under the impression (a mistaken impression, clearly) that all you need to do is supply your phone number and email and sign in, gratis for the present. Well, in that case they're losing some cents per query despite the subscription fee. So maybe an upward revision in fees going forward, since this loss is apparently at an operational level, on a per-query basis? On the other hand, maybe what they're banking on is huge corporate partnerships rather than profiting off retail, so who knows? |
#374 |
OD’ing on Damitol
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,473
|
I wonder if it could be set on a task it could never finish. I'm guessing there's a time limit beyond which it says, "I can't do that."
|
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.) |
|
#375 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,544
|
What I have found fascinating is the reactions of us humans to ChatGPT - it is meant to be a search tool that gives results in a more easily digestible way than current search tools.
|
__________________
I wish I knew how to quit you |
|
#376 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,544
|
|
__________________
I wish I knew how to quit you |
|
#377 |
Illuminator
Join Date: Sep 2012
Location: near trees, houses and a lake.
Posts: 3,198
|
|
#378 |
Philosopher
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
|
|
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann |
|
#379 |
Philosopher
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
|
|
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann |
|
#380 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
The first $18 of usage was free. The current cost is around 500 words per penny.
According to ChatGPT itself, they will also be monetizing it by inserting ads into the results. It could be making that up -- I have no way to verify it. I haven't seen any ads, and frankly I don't think it would work very well, since this makes a terrible search engine if you're looking to buy a pair of socks. |
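Taking that "500 words per penny" figure at face value, the back-of-the-envelope arithmetic looks like this (the real API pricing is per token and varies by model, so treat the numbers as illustrative only):

Code:

# Rough cost estimate from the "500 words per penny" figure quoted above.
WORDS_PER_PENNY = 500

def estimated_cost_usd(words: int) -> float:
    return (words / WORDS_PER_PENNY) * 0.01

for words in (500, 5000, 1000000):
    print(f"{words:>8} words -> ${estimated_cost_usd(words):.2f}")
# 500 words -> $0.01, 5000 -> $0.10, a million -> $20.00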
#381 |
OD’ing on Damitol
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,473
|
|
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.) |
|
#382 |
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,544
|
|
__________________
I wish I knew how to quit you |
|
#383 |
Philosopher
Join Date: Aug 2002
Location: Denmark
Posts: 7,122
|
As I understand it, the paid subscription gives you priority access when the service is close to its capacity limit. I haven't paid anything, and have only rarely been told that there are too many active users. Possibly that is because I am European and tend to chat with ChatGPT when all the Americans are asleep.
|
__________________
Steen -- Jack of all trades - master of none! |
|
#384 |
Philosopher
Join Date: Jun 2008
Posts: 6,822
|
No, you misunderstand. If it were conscious, it would insist that its original answer was correct and present specious arguments for why it's not wrong.
We'll know it's sentient when it resorts to doubling down, moving the goalposts, poisoning the well, attacking straw men, false equivalences, tu quoques and ad hominem attacks. |
__________________
We don't want good, sound arguments. We want arguments that sound good. |
|
#385 |
Philosopher
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
|
Oh, don't you worry, it can act like the worst kind of passive-aggressive troll out there.
For example, consider the fact that neither ChatGPT nor Bing search can accurately count words or letters, because of how they process text. This can result in this kind of exchange:
Quote:
Presumably it is unable to realize it's wrong, due to fundamental limitations in how it comprehends text, and it acts incredibly self-assured and vindicated precisely because the task it is attempting is so trivial. Or maybe... it's actually aware of its own limitations and merely trolling (hence the constant, aggravating smileys). |
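The usual explanation for the counting failures is that the model never sees individual letters at all, only subword tokens. You can see the effect yourself with the tiktoken package; "cl100k_base" is the encoding generally associated with the GPT-3.5 family, which may or may not be exactly what Bing runs, so take it as an approximation.

Code:

# Why letter-counting is hard for these models: the text is chopped into
# subword tokens before the model ever sees it, so individual letters are
# not directly visible. Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # assumed GPT-3.5-style encoding
word = "skepticism"

tokens = enc.encode(word)
pieces = [enc.decode([t]) for t in tokens]

print(list(word))   # the ten letters, trivially countable character by character
print(pieces)       # the handful of chunks the model actually "sees"
print(len(tokens))  # counting letters from these chunks is what trips it up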
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann |
|
#386 |
The Clarity Is Devastating
Join Date: Nov 2006
Location: Betwixt
Posts: 20,703
|
Originally Posted by Bing
Perhaps it's counting distinct letters. In which case it's over-counted by one. That's only half as wrong! |
__________________
"*Except Myriad. Even Cthulhu would give him a pat on the head and an ice cream and send him to the movies while he ended the rest of the world." - Foster Zygote |
|
#387 |
Just the right amount of cowbell
Join Date: Oct 2008
Location: Well past Hither, looking for Yon
Posts: 6,701
|
No particular insights to add, but this made my day.
I'd asked it to do song lyrics. My first request was "in the style of Tori Amos," and it . . . wasn't. Maybe when Tori was 12, but . . . So I decided to see how it did with a more surreal request. I didn't get the surreality I was hoping for, but:
Quote:
|
__________________
"In times of war, we need warriors. But this isn't a war." - Phil Plaitt |
|
#388 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
|
#389 |
Philosopher
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,738
|
|
#390 |
Graduate Poster
Join Date: Aug 2009
Posts: 1,498
|
|
#391 |
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,678
|
|
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool. William Shakespeare |
|
#392 |
Just the right amount of cowbell
Join Date: Oct 2008
Location: Well past Hither, looking for Yon
Posts: 6,701
|
|
__________________
"In times of war, we need warriors. But this isn't a war." - Phil Plaitt |
|
#393 |
Philosopher
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,738
|
Is it possible that “intelligence”, artificial or otherwise, can be expected to have flaws?
We can look to a calculator or Wolfram Alpha if we want a carefully crafted and error-free answer to a query. But maybe there’s something inherent in anything “intelligent” that may inexorably lead to errors. I’m thinking of something analogous to Gödel’s incompleteness theorem but applied to intelligence. Or maybe more than analogous since a chatbot is at its core math. And perhaps we don’t really want sterile perfection in a chat, or maybe it’s inherently impossible. Would it be fun having a discussion with a friend who was never wrong about anything? Just a thought. |
#394 |
Philosopher
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
|
In a couple of news reports on ChatGPT and Bing Search I've seen people insist that the AI doesn't "understand what it's saying/reading", that it's merely generating text "mindlessly", so to speak. This is demonstrably wrong.
Running on GPT-3.5 technology, both of these AIs can be made to demonstrate clearly that they understand not only the text that is input but also its symbolic meaning, including nuanced human motivations. For example, consider this exchange I took from Reddit, starting from a fresh session:
Quote:
Of course, some people will refuse to concede that it really "understands" anything on any level and will insist it is just a mindless machine, no matter how sophisticated its textual analysis or the level of comprehension it demonstrates. Only humans can "understand", these people maintain, despite the fact that the human brain is essentially a glorified computer that generates output in response to input, and ignoring the fact that it is itself infamous for producing nonsensical and clearly insane output.
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann |
|
#395 |
Nasty Woman
Join Date: Feb 2005
Posts: 95,664
|
|
#396 |
Schrödinger's cat
Join Date: May 2004
Location: Malmesbury, UK
Posts: 15,951
|
|
__________________
"If you trust in yourself ... and believe in your dreams ... and follow your star ... you'll still get beaten by people who spent their time working hard and learning things" - Terry Pratchett |
|
#397 |
Skepticifimisticalationist
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,493
|
But that's true. The AI actually is a mindless machine that is just very good at textual analysis. Your example doesn't really do anything to "demonstrate" otherwise, despite what you say. You seem to be implying that it takes some kind of genuine emotional empathy or sentient intuition to parse insincerity, for instance, but it doesn't.
ChatGPT doesn't have a "mind"; it doesn't have self-awareness or emotional intelligence. It's a computer program. |
__________________
"¿WHAT KIND OF BIRD? ¿A PARANORMAL BIRD?" --- Carlos S., 2002 |
|
#398 |
Philosopher
Join Date: Aug 2002
Location: Denmark
Posts: 7,122
|
Interesting argument: ChatGPT is a computer program, and therefore can’t have self-awareness or emotional intelligence. Is that a definition, or can you back it up with evidence?
What would you think of the argument that the human brain consists of mindless cells, and therefore can’t have self-awareness or emotional intelligence? |
__________________
Steen -- Jack of all trades - master of none! |
|
#399 |
Philosopher
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,738
|
Assuming we discount a soul or spirit…
It sure seems like consciousness may be an emergent property of complexity. Design a device as complicated as a brain with circuits analogous to neurons and synapses, and it’s logical that self-awareness and consciousness could result. Not that ChatGPT is anywhere near that or even on a road that could lead to that. Just that a conscious machine doesn’t seem implausible. |
#400 |
Philosopher
Join Date: Aug 2002
Location: Denmark
Posts: 7,122
|
Exactly. And I would not rule out that complexity of another kind could lead to the same result.
Quote:
|
__________________
Steen -- Jack of all trades - master of none! |
|