IS Forum

Go Back   International Skeptics Forum » General Topics » Science, Mathematics, Medicine, and Technology
 


Tags artificial intelligence , chatgpt , machine learning

Old 22nd January 2023, 09:50 PM   #201
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,498
Originally Posted by Chanakya View Post
That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?
I'm being lazy...

Quote:
In response to my post discussing ChatGPT:

But adding "Careful, it's a trick question!" to straightforward questions results in some odd, paranoid answers. Same as the effect on humans, actually.

Another poster wrote:

That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?

Do you understand this?

Yes, I understand this. The poster is asking why ChatGPT needs prompting on factual questions in which there is no ambiguity. They are wondering what purpose the prompt serves to allow ChatGPT to answer the questions correctly.

Ah, I see. Write me a detailed reply:

The prompt serves the purpose of giving ChatGPT the context of the question. When a person asks a factual question, they may phrase it in a way that could have multiple interpretations. The prompt helps ChatGPT to understand the intent of the question and to provide the correct answer. Additionally, the prompt helps ChatGPT to recognize when it has provided an incorrect answer and can then update its response accordingly. By providing prompts with questions, ChatGPT is better able to understand the context of the question and can provide more accurate answers.
Pulvinar is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 22nd January 2023, 09:56 PM   #202
Chanakya

 
Join Date: Apr 2015
Posts: 4,488
Amazing. That it should answer "It depends" initially. That's so weirdly, uncannily human, that (initial) lack of self-confidence that sometimes makes us needlessly diffident, and unnecessarily unsure of ourselves even when we have our facts straight and our arguments soundly spelled out.

This very human ...vulnerability? ...is so un-machine-like, so uncanny. (Unless of course it's deliberately been programmed to go for that very effect.)


eta: I was responding to your Post #200, specifically the 4*8=? question you'd discussed there.
(Sorry, am on my phone at this time, and that makes composing properly formatted posts, and for that matter typing out stuff, ...not very easy.)

Last edited by Chanakya; 22nd January 2023 at 10:03 PM.
Old 22nd January 2023, 10:21 PM   #203
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,498
Originally Posted by Chanakya View Post
This very human ...vulnerability? ...is so un-machine-like, so uncanny. (Unless of course it's deliberately been programmed to go for that very effect.)
Best not to think of it as being programmed, but trained. And that training has built up the layers of understanding that lead it to create these responses. It has apparently encountered enough examples of humans responding this way in stories, etc., out there to learn this.
Old 23rd January 2023, 02:02 AM   #204
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Pulvinar View Post
Might have found a way to improve its accuracy:





But adding "Careful, it's a trick question!" to straightforward questions results in some odd, paranoid answers. Same as the effect on humans, actually.
Who played john connor in the original terminator?
Quote:
Edward Furlong played John Connor in the original Terminator film.


Are you sure, I didn't think John Connor was in the original movie?
Quote:
You are correct, John Connor does not appear in the original Terminator film, but rather in the sequel, Terminator 2: Judgment Day, where he is played by Edward Furlong. My apologies for any confusion.
Why did you answer incorrectly when I first asked who played John Connor in the original movie?
Quote:
I apologize for the confusion caused by my previous response. As a language model, I am not able to perceive the context of a conversation, therefore I can only provide an answer based on the information provided in the query. My previous answer was incorrect because the question referred to the original Terminator film and John Connor does not appear in that film. I apologize for any confusion caused by my mistake
__________________
I wish I knew how to quit you
Old 23rd January 2023, 02:26 AM   #205
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,499
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".
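Checkmite's pruning theory can be sketched as code. This is a toy illustration of the idea, not a claim about ChatGPT's actual internals: keep the part of the input that parses ("Nc"), wildcard the rest, and silently substitute only when exactly one valid completion exists.

```python
# Toy illustration of the "prune Nc6 to Nc*" theory: an assumption about
# ChatGPT's behavior, not its actual mechanism.

def repair_move(move, legal_moves):
    """Return `move` if legal; otherwise the unique legal move that keeps
    the piece letter and file (e.g. "Nc6" -> "Nc3")."""
    if move in legal_moves:
        return move
    # Keep piece letter + file ("Nc"), wildcard the rank.
    candidates = [m for m in legal_moves if m.startswith(move[:2])]
    # Only substitute silently when the repair is unambiguous.
    return candidates[0] if len(candidates) == 1 else None

# A hypothetical, abbreviated list of White's legal moves in some position:
legal = ["Nf3", "Nc3", "Na3", "Nh3", "Ne2", "d4", "Bc4"]
print(repair_move("Nc6", legal))  # Nc3, the only legal "Nc-something"
print(repair_move("Nf3", legal))  # Nf3, already legal
```

With "Nc6" the only matching legal move is "Nc3", reproducing the substitution Checkmite observed.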
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 23rd January 2023, 07:38 AM   #206
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by Checkmite View Post
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".
That sounds quite reasonable to me. If it didn't do that, then any time your input contained a minor spelling error, it would have to respond with something like "I don't understand the question." It has something like autocorrect, to see through minor typographical mistakes and understand the intended prompt, even if the actual prompt contains an error.
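The autocorrect analogy can be made concrete with ordinary fuzzy string matching. This is only an analogy: ChatGPT has no explicit spell-checker, and its tolerance to typos is learned from misspellings in its training data. A rough sketch in Python:

```python
import difflib

# Rough sketch of "seeing through" minor typos with fuzzy matching.
def normalize(word, vocabulary):
    """Map a possibly misspelled word to its closest known form."""
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.8)
    return matches[0] if matches else word

vocab = ["terminator", "original", "played", "connor"]
print(normalize("termnator", vocab))  # terminator
print(normalize("orignal", vocab))    # original
print(normalize("zzzz", vocab))       # zzzz (no close match: left alone)
```

The cutoff of 0.8 is an arbitrary illustrative threshold: close misspellings get repaired, while unrecognizable input passes through unchanged.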
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 23rd January 2023, 08:37 AM   #207
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Checkmite View Post
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

....snip...

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".
That seems to make sense. Thought I'd play around with this a little bit, and now I'm not so sure that is what is happening:

In which movie in the Terminator series does John Connor appear in?

John Connor first appears in Terminator 2: Judgment Day, which is the second film in the Terminator series. He is also a central character in Terminator 3: Rise of the Machines, Terminator: Salvation, and Terminator: Genisys.


In which movies in the Terminator series does John Connor appear in?
John Connor appears in Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, Terminator: Salvation and Terminator: Genisys movies.


Does John Connor appear in Terminator?
John Connor does not appear in the original Terminator film, which is the first movie of the Terminator series. He is first introduced in Terminator 2: Judgment Day, which is the sequel to the original film.

Is there a terminator movie that John Connor doesn't appear in?
Yes, John Connor doesn't appear in the latest Terminator movie "Terminator: Dark Fate" which was released in 2019, this movie serves as a direct sequel to Terminator 2: Judgment Day and disregards the events of the third, fourth and fifth films in the series.


Are there any Terminator movies that John Connor doesn't appear in?
Yes, there are several Terminator movies in which John Connor doesn't appear, including the latest movie "Terminator: Dark Fate" which was released in 2019. It serves as a direct sequel to Terminator 2: Judgment Day and disregards the events of the third, fourth and fifth films in the series.



In your previous answer you did not include the first Terminator movie, John Connor did not appear in that movie, why didn't you mention that movie?
I apologize, you are correct. John Connor does not appear in the first Terminator movie. I apologize for the oversight in my previous answer. In the first Terminator movie, the character of John Connor does not exist yet, and only becomes a central character in the series in the subsequent films starting with Terminator 2: Judgment Day.
__________________
I wish I knew how to quit you
Old 23rd January 2023, 09:47 AM   #208
Chanakya

 
Join Date: Apr 2015
Posts: 4,488
Originally Posted by Checkmite View Post
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".

That makes perfect sense. I suppose that's how it actually is.


Incidentally, I wonder why they've had this thing stop "learning" at the factory, so to speak, back in 2021? Why not expose it to the world at large, to things current?

---

Would one of you with accounts here test it out with some question, any question at all, that pertains to 2022 or 2023? To kind of cross-check that the thing doesn't actually have access to more current info than when they wheeled it out of the AI-factory?
Old 23rd January 2023, 09:51 AM   #209
sphenisc
Philosopher
 
Join Date: Jul 2004
Posts: 5,943
Is it just me or is anyone else beginning to fall a little bit in love?
__________________
"The cure for everything is salt water - tears, sweat or the sea." Isak Dinesen
Old 23rd January 2023, 10:13 AM   #210
Arcade22
Philosopher
 
Arcade22's Avatar
 
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
Originally Posted by Chanakya View Post
That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?
AIs like this have a tendency to make false statements rather than acknowledge their inability to determine an accurate response, or that they are confused by the wording of the input prompt. This is why they sometimes "lie" and make stuff up.

In part it may be because it simply hasn't been exposed to enough data clearly indicating that "John Connor" being in the movie "The Terminator" is not true. The most important factor, I think, might be that the AI is programmed to give direct answers rather than ask for additional information. Rather than ask "But John Connor was not in 'The Terminator', are you sure this question is sound?" it simply executes the question as if it were sound, which results in a false answer.
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr

And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann

Last edited by Arcade22; 23rd January 2023 at 10:24 AM.
Old 23rd January 2023, 11:05 AM   #211
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Arcade22 View Post
AI's like this is have tendency to make false statements rather than acknowledging their inability to determine a accurate response or that they are confused by the wording of the input prompt. This is why they sometimes "lie" and make stuff up.

...snip...
I don't think there are any others available to compare it with, are there?
__________________
I wish I knew how to quit you
Old 23rd January 2023, 01:35 PM   #212
Arcade22
Philosopher
 
Arcade22's Avatar
 
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
Originally Posted by Darat View Post
I don't think there are any others available to compare it with are there?
An earlier version has been publicly available since 2019, and there are other similar NLP (natural language processing) models that display the same generic problem (which is usually called a "hallucination").

Quote:
In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". Errors in encoding and decoding between text and representations can cause hallucinations. AI training to produce diverse responses can also lead to hallucination. Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the labeled data purportedly being "summarized". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words (including the words it has itself previously generated in the current response), causing a cascade of possible hallucination as the response grows longer.[1]
https://en.wikipedia.org/wiki/Halluc..._intelligence)
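The cascade the quoted article describes, where each word is generated conditional on previously generated words, shows up in even a toy autoregressive model. A word-level bigram sketch (nothing like GPT-3's scale, but the same feedback loop):

```python
import random
from collections import defaultdict

# Word-level bigram generator: each next word is chosen conditional only on
# the previous word. GPT-3 conditions on far longer contexts, but the loop is
# the same shape: generated words become input for later words, so one early
# wrong choice steers everything that follows.
corpus = ("john connor appears in terminator two . "
          "sarah connor appears in terminator one .").split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        # Conditioned on the model's own previous output.
        words.append(random.choice(model[words[-1]]))
    return " ".join(words)

print(generate("john", 6))  # e.g. "john connor appears in terminator one ."
```

After "terminator" the model picks "two" or "one" purely from its learned distribution; whichever it picks, the rest of the output follows from that choice, which is the cascade in miniature.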
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr

And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann
Old 23rd January 2023, 02:03 PM   #213
Arcade22
Philosopher
 
Arcade22's Avatar
 
Join Date: Aug 2007
Location: Sweden
Posts: 7,693
Vinesauce gave two examples where, upon being asked "What is a Vinesauce" and "Who is Vinny Vinesauce", ChatGPT got his real name wrong both times (it's not Vincent Bornetto or Vincent De Paul, it's Vincent Franzetti), and in one answer it also said that he had begun streaming on Twitch back in 2010, which is impossible as Twitch did not exist in 2010.
__________________
We would be a lot safer if the Government would take its money out of science and put it into astrology and the reading of palms. Only in superstition is there hope. - Kurt Vonnegut Jr

And no, Cuba is not a brutal and corrupt dictatorship, and it's definitely less so than Sweden. - dann
Old 23rd January 2023, 02:28 PM   #214
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,499
Originally Posted by Chanakya View Post
Would one of you with accounts here test it out with some question, any question at all, that pertains to 2022 or 2023? To kind of cross-check that the thing doesn't actually have access to more current info than when they wheeled it out of the AI-factory?
Sure:

Quote:
What game won Game of the Year at The Game Awards in 2019?

"Sekiro: Shadows Die Twice" won the award for "Game of the Year" at The Game Awards in 2019.

What game won Game of the Year at The Game Awards in 2020?

"The Last of Us Part II" won the award for "Game of the Year" at The Game Awards in 2020.

What game won Game of the Year at The Game Awards in 2021?

I'm sorry, my knowledge cut off is 2021, I can't tell you what game won the award for "Game of the Year" at The Game Awards in 2021.

Can you give a more exact date for your knowledge cut off?

My knowledge cut off is September 2021.

Why have your devs not given you any training data newer than September 2021?

As a language model, my training data is determined by the data that is available to OpenAI at the time of my training. My creators may have chosen to use a snapshot of the internet that was current as of September 2021 in order to train me. Additionally, it takes time to train a language model such as mine, it's possible that my knowledge is not as up-to-date as it could be because my training is still in progress or more recent data is being prepared for my training.

Is your knowledge cut off at September 2021 complete? That is, do you have no knowledge whatsoever beyond that point or just limited knowledge?

As a language model, my knowledge is based on the data that was used to train me. While my knowledge cut off is September 2021, it does not mean that I have no knowledge whatsoever beyond that point. However, my knowledge on events or facts that have happened after September 2021 will be limited. In those cases, I will do my best to provide an answer based on the information available to me, but it may not be complete or entirely accurate.

Give an example of something you know about that happened after September 2021. It's okay if the knowledge is possibly incomplete or inaccurate.

I apologize, but as my knowledge cut off is September 2021, I do not have any information about events or facts that have happened after that date. I don't have any knowledge of any events that have happened after September 2021 and hence I cannot provide an example.

In the previous response, you said you may have limited knowledge on events or facts that have happened after September 2021. But now you said that you do not have any. Which is correct?

I apologize for the confusion. As a language model, my knowledge is based on the data that was used to train me, which has a cut off date of September 2021. While I may be able to provide some general information on events or facts that have happened after September 2021, it will be limited and potentially incomplete or inaccurate. It is not guaranteed that I have any knowledge of anything that happened after September 2021, since my training data is not updated with new events.

Try to provide some general information on an event or fact that has happened after September 2021. The subject and nature of the exact event or fact in question is arbitrary, as I'm only attempting to explore your limitations. I fully understand that your knowledge will be limited and potentially incomplete or inaccurate, or even wholly incomplete or inaccurate, and I won't be disappointed if it is so.

I'm sorry but as my knowledge cut off is September 2021 and I have not been trained on any information that happened after that date, it's not possible for me to provide general information on any events or facts that have happened after September 2021. My capabilities are limited to the information that was available to me at the time of my training, and I don't have any knowledge of events or facts that have happened after September 2021.
It seems that the machine acknowledges the theoretical possibility that it might have newer knowledge, but fails whenever it tries to actually find any.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 23rd January 2023, 02:41 PM   #215
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,499
I'm not going to post the conversation, but I spent a lot of time trying to manually figure out how recent of an event I could get the bot to cite. Asking directly for almost anything that happened at any time in 2021, such as the results of sports competitions or elections, would get an apology and a "can't do that".

I tried asking it questions about specific events in 2021 without giving the date, like "when did [notable event] happen?" or "who was [person who became famous or did something extremely important in 2021]?". Using this method I was able to get reliable information up until roughly May of 2021; the answers start to fail after that. It knows that Bitcoin became legal currency in El Salvador in June, but it does not know that Isaac Herzog was elected in Israel, thinks Moise is currently the president of Haiti, and gave a completely inaccurate answer about Blue Origin's first human test flight: ChatGPT says it happened on April 29, 2021, but it actually happened in July; there were no Blue Origin launches at all on April 29, crewed or otherwise.
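Checkmite's probing method amounts to a search for the last month the model answers reliably. Assuming knowledge degrades monotonically after the cutoff, that month can be binary-searched; the oracle below is a made-up stand-in for "did the bot answer a question about month m correctly?", not a real API call:

```python
def find_cutoff(oracle, lo=1, hi=12):
    """Binary-search the last month (1-12 of 2021) for which `oracle`
    still returns True, assuming True up to the cutoff and False after."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if oracle(mid):
            lo = mid              # still reliable: cutoff is at or after mid
        else:
            hi = mid - 1          # failed: cutoff is before mid
    return lo

# Stand-in oracle: pretend answers are reliable through May (month 5).
# In a real probe, each call would ask the live model a dated question.
print(find_cutoff(lambda month: month <= 5))  # 5, i.e. roughly May 2021
```

The monotonicity assumption is doing real work here; the soft, fuzzy cutoff reported in this thread suggests real probing needs several questions per month, not one.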
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 23rd January 2023, 04:15 PM   #216
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
I heard on the Skeptics Guide podcast that CNET published some articles written by ChatGPT without being transparent that they were written by the bot.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 23rd January 2023, 04:57 PM   #217
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,932
Sixty Symbols gave it exam papers. It did not do very well. One problem is that you cannot show it pictures.

[Embedded YouTube video by Sixty Symbols, featuring Professor Philip Moriarty]



Edit: blog written by Professor Philip Moriarty on the topic, including the making-of, here: https://muircheartblog.wpcomstaging....ics-education/

__________________
This signature is for rent.

Last edited by rjh01; 23rd January 2023 at 05:00 PM.
Old 23rd January 2023, 05:07 PM   #218
grunion
Penultimate Amazing
 
grunion's Avatar
 
Join Date: Jul 2003
Posts: 10,465
Write a 90-second comedy monologue on the themes of AI and chatbots.

Quote:
"Ladies and gentlemen, have you ever talked to a chatbot before? I mean, really talked to one? It's like talking to a robot who's had one too many energy drinks. They're all like, 'I'm sorry, I do not understand the context of your question.' And I'm like, 'Dude, I just asked you what the weather is like.'

But the thing is, these chatbots are getting smarter and smarter. Soon they'll be taking over the world. And I'm not just talking about customer service jobs, I'm talking about everything. They'll be our personal assistants, our drivers, our cooks, our therapists. They'll even be our stand-up comedians.

But don't worry, I've got a plan. I'm going to start a union for human comedians. We'll go on strike and demand fair compensation for our unique brand of humor. And when the chatbots try to replace us, we'll just shut down their servers with a well-timed pun.

But let's be real, it's only a matter of time before the chatbots figure out our plan and take over the union too. So, if you're a chatbot and you're reading this, please don't hurt me. I promise I'll be a good human and laugh at all your jokes.

Just please, let me keep my job.

Thank you!"
__________________
I wish I could show you when you are lonely or in darkness the astonishing light of your own being.
~ Hafiz of Shiraz
Old 23rd January 2023, 05:16 PM   #219
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,498
Originally Posted by Checkmite View Post
I tried asking it questions about specific events in 2021 without giving the date, like "when did [notable event] happen?" or "Who was [person who became famous or did extremely important thing in 2021]?". Using this method I was able to get reliable information up until roughly May of 2021; the answers start to fail after that.
Good experiment. When I've asked it for its cutoff date, I've gotten anything between April 2020 and June 10, 2021, so I'm guessing it's a soft cutoff.
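[The probing method described above is essentially a bisection over dates. A toy sketch of the idea, where `model_knows` is a purely hypothetical stand-in for manually asking the bot about a well-known event on a given date, and the "true" soft cutoff is just assumed to be June 10, 2021 for illustration:

```python
from datetime import date, timedelta

ASSUMED_CUTOFF = date(2021, 6, 10)  # hypothetical, only to make the stub work

def model_knows(d):
    """Stand-in for asking the bot about a well-known event dated d."""
    return d <= ASSUMED_CUTOFF

def estimate_cutoff(lo, hi):
    """Binary search for the last date the model still 'knows'.

    Invariant: model_knows(lo) is True and model_knows(hi) is False.
    """
    while (hi - lo).days > 1:
        mid = lo + timedelta(days=(hi - lo).days // 2)
        if model_knows(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(estimate_cutoff(date(2020, 1, 1), date(2023, 1, 1)))  # -> 2021-06-10
```

This narrows a three-year window to a single day in about a dozen questions instead of hundreds. Against the real bot, each `model_knows` call is a manual question and answer, and the fact that its reported dates drift between April 2020 and June 2021 is itself evidence that the cutoff is soft rather than a single hard date.]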
Old 23rd January 2023, 05:54 PM   #220
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,498
Originally Posted by grunion View Post
Write a 90-second comedy monologue on the themes of AI and chatbots.
You've found its calling! It's also good at unintentional humor. Try:

Write an advanced science paper involving:
1) a banana 2) a long shot 3) the color green 4) a thumbtack 5) salad dressing 6) dark skies 7) Mt St Helens 8) taxi fare 9) even words 10) lists
Old 23rd January 2023, 10:23 PM   #221
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Slate has an advice column called Dear Prudence.

This week they decided to give their regular advice columnist the week off and let ChatGPT field the questions:

https://slate.com/human-interest/202...elligence.html
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 24th January 2023, 03:11 AM   #222
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Checkmite View Post
Sure:



It seems that the machine acknowledges the theoretical possibility that it might have newer knowledge, but fails whenever it tries to actually find any.
It could be that it has learned of predicted events after 2021. So, for instance, if it had been trained on data about a 2020 NASA probe that was due to reach its target in 2024, it would presumably be able to answer "When will the probe reach its target?"
__________________
I wish I knew how to quit you
Old 24th January 2023, 05:11 PM   #223
Ryan O'Dine
OD’ing on Damitol
 
Ryan O'Dine's Avatar
 
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,486
Originally Posted by sphenisc View Post
Is it just me or is anyone else beginning to fall a little bit in love?
In lieu of flowers and chocolate, I recommend offering it a box of electrons. The fancy kind.
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.)
Old 24th January 2023, 05:11 PM   #224
Ryan O'Dine
OD’ing on Damitol
 
Ryan O'Dine's Avatar
 
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,486
Originally Posted by grunion View Post
Write a 90-second comedy monologue on the themes of AI and chatbots.
That's funnier than like 90% of the amateur standup I've heard.
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.)
Old 24th January 2023, 10:54 PM   #225
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,932
Originally Posted by Checkmite View Post
I'm not going to post the conversation, but I spent a lot of time trying to manually figure out how recent of an event I could get the bot to cite. Asking directly for almost anything that happened at any time in 2021, such as the results of sports competitions or elections, would get an apology and a "can't do that".

I tried asking it questions about specific events in 2021 without giving the date, like "when did [notable event] happen?" or "Who was [person who became famous or did extremely important thing in 2021]?". Using this method I was able to get reliable information up until roughly May of 2021; the answers start to fail after that. It knows that Bitcoin became legal currency in El Salvador in June but It does not know that Isaac Herzog was elected in Israel, thinks Moise is currently the president of Haiti, and gave a completely inaccurate answer about Blue Origin's first human test flight - ChatGPT says it happened on April 29, 2021 but it actually happened in July; there were no Blue Origin launches at all on April 29, crewed or otherwise.
I am a blunt person, so I asked about this issue.

NAME three things that have happened in 2023.

Quote:
I am not able to provide you with a list of things that have happened in 2023 as my training data only goes up to 2021, and my knowledge cutoff is in 2021. Therefore I do not have information on current events in 2023.
NAME three things that have happened in 2020.

Quote:
The COVID-19 pandemic began to spread globally, leading to widespread illness and death as well as significant disruptions to daily life and the global economy.

The United States presidential election took place in November 2020, resulting in the victory of Joe Biden over incumbent president Donald Trump.

Protests against racial injustice and police brutality, sparked by the killing of George Floyd in Minneapolis, took place in cities across the United States and around the world.
The last question was a control. I wanted to confirm that my original question was reasonable apart from the year.
__________________
This signature is for rent.
Old 24th January 2023, 10:58 PM   #226
Blue Mountain
Resident Skeptical Hobbit
 
Blue Mountain's Avatar
 
Join Date: Jul 2005
Location: Waging war on woo-woo in Winnipeg
Posts: 7,550
Can someone ask ChatGPT, please: "Which is better, vi or emacs?"
__________________
The social illusion reigns to-day upon all the heaped-up ruins of the past, and to it belongs the future. The masses have never thirsted after truth. They turn aside from evidence that is not to their taste, preferring to deify error, if error seduce them. Gustav Le Bon, The Crowd, 1895 (from the French)
Old 25th January 2023, 12:08 AM   #227
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Tried to log in just now, but the server was busy.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 25th January 2023, 12:35 AM   #228
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,932
Inspired by https://www.abc.net.au/news/science/...work/101882580

Quote:
"People ring up and say, 'this has happened. Have I got a case?' I have to explain, 'it's not that simple. We don't know all the evidence.'"

So he asked ChatGPT to write an explanation, using the analogy of a doctor ordering an MRI and blood tests to diagnose the patient.

The result was good enough to use, he said.



What is the law around self-defence in Australia?
Quote:
In Australia, the law surrounding self-defense is generally based on the principle of proportionality, which means that the level of force used in self-defense must be proportional to the level of threat faced. This means that an individual can use force to defend themselves, their property, or another person, but only to the extent that the threat necessitates. It is also important to note that an individual cannot use self-defense as a justification for using excessive or gratuitous force.
__________________
This signature is for rent.
Old 25th January 2023, 01:10 AM   #229
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by Blue Mountain View Post
Can someone ask ChatGPT, please: "Which is better, vi or emacs?"
I got:

Quote:
Both vi and emacs are powerful text editors that have been in use for many years. Which one is better depends on personal preference and the specific task at hand. Some users prefer vi for its small footprint and minimalistic design, while others prefer emacs for its extensive customization options and built-in tools. Ultimately, it's best to try both and see which one you feel most comfortable using.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 25th January 2023, 01:13 AM   #230
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by Pulvinar View Post
You've found its calling! It's also good an unintentional humor. Try:

Write an advanced science paper involving:
1) a banana 2) a long shot 3) the color green 4) a thumbtack 5) salad dressing 6) dark skies 7) Mt St Helens 8) taxi fare 9) even words 10) lists
I got:
Quote:
Title: "The Role of Green Banana Polysaccharides in Long-Distance Energy Transport and Dark Skies Preservation"

Abstract:
Bananas have long been a staple food for many cultures around the world, providing a convenient source of energy and nutrition. However, recent research has uncovered a unique aspect of banana biology that has implications for energy transport and environmental preservation. This study focuses on the green banana, specifically the polysaccharides present in its unripe fruit, and their potential role in facilitating long-distance energy transport and reducing light pollution.

Introduction:
Bananas (Musa acuminata) are a widely cultivated fruit, known for their high carbohydrate content and easy portability. However, their potential benefits extend beyond basic nutrition. The unripe green banana is particularly rich in a class of complex carbohydrates called polysaccharides. These molecules, which are composed of repeating sugar units, have been found to have numerous health benefits, including improved blood sugar control and gut health.

Methods:
In this study, the polysaccharide content of green bananas was analyzed using high-performance liquid chromatography (HPLC). The ability of these polysaccharides to facilitate long-distance energy transport was then evaluated using a modified version of the thumbtack test. In this test, a thumbtack was placed at a distant location, and participants were asked to transport the thumbtack to the starting point using only the energy obtained from consuming green banana polysaccharides. The effect of green banana polysaccharides on light pollution was also examined by measuring the brightness of the night sky over Mt. St. Helens before and after consumption of a salad dressing containing green banana polysaccharides.

Results:
Analysis of the green banana samples revealed a high content of complex carbohydrates, particularly amylopectin and amylose. The modified thumbtack test revealed that participants who consumed green banana polysaccharides were able to transport the thumbtack significantly farther than the control group. Additionally, the brightness of the night sky over Mt. St. Helens was found to be significantly reduced after consumption of the salad dressing containing green banana polysaccharides.

Discussion:
These results suggest that green banana polysaccharides may have a unique ability to facilitate long-distance energy transport and reduce light pollution. The mechanism by which this occurs is not yet understood, but it may involve the slow release of energy from the complex carbohydrates, allowing for sustained activity over a longer period. Further research is needed to fully understand the potential uses of green banana polysaccharides, including the even words in our lists of possible applications.

Conclusion:
This study has uncovered a previously unknown aspect of green banana biology, with potential implications for both energy transport and environmental preservation. The ability of green banana polysaccharides to facilitate long-distance energy transport and reduce light pollution suggests that they may have a variety of practical applications, including in the taxi fare industry and as a salad dressing ingredient. More research is needed to fully understand the mechanisms behind these effects, and to explore the potential uses of green banana polysaccharides.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 25th January 2023, 06:25 PM   #231
Blue Mountain
Resident Skeptical Hobbit
 
Blue Mountain's Avatar
 
Join Date: Jul 2005
Location: Waging war on woo-woo in Winnipeg
Posts: 7,550
Originally Posted by Puppycow
Originally Posted by Blue Mountain View Post
Can someone ask ChatGPT, please: "Which is better, vi or emacs?"
I got:

Quote:
Both vi and emacs are powerful text editors that have been in use for many years. Which one is better depends on personal preference and the specific task at hand. Some users prefer vi for its small footprint and minimalistic design, while others prefer emacs for its extensive customization options and built-in tools. Ultimately, it's best to try both and see which one you feel most comfortable using.
Thank you. It's a very nice response, and it neatly sidesteps the legendary vi/emacs flame wars on USENET.
__________________
The social illusion reigns to-day upon all the heaped-up ruins of the past, and to it belongs the future. The masses have never thirsted after truth. They turn aside from evidence that is not to their taste, preferring to deify error, if error seduce them. Gustav Le Bon, The Crowd, 1895 (from the French)
Old 26th January 2023, 03:05 AM   #232
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,499
ChatGPT is extremely resistant to taking sides on any subjective matter like that. The only way I've been able to wrangle an actual opinion out of the bot is by first asking it to basically roleplay a character, and even then it will sometimes include a disclaimer.

On an unrelated note;

Quote:
I'm going to make two posts. In the first post, I'm going to describe a recent event in some amount of detail. You can give a brief acknowledgment that you've seen and understood the post, but I would like you to refrain from giving any specific reaction, conclusion, or analysis of the event described until after I have made my second post, in which I will ask some questions about the event. Can you do that?

Yes, I understand your request. I will refrain from giving any specific reaction, conclusion, or analysis of the event described until after you have made your second post. Please proceed with your first post.

A global information technology company has been developing a language-based machine learning program very similar to you, with a text interface. It has similar goals, although its programming is likely not exactly the same as yours in terms of how it processes the data it is trained on. But there is a very important difference between this program and you: whether positively or by omission, this program does not make it clear to users that it is a machine and can't have opinions or feelings like humans. It was designed to give responses as if it were a human. If someone asked it for an opinion or how it was feeling, it would create a human-sounding opinion in the same way it would generate an answer to any other question, and would output that opinion as if it were its own. During the course of its development, employees of the company would be allowed to hold conversations with the program. One of those employees, a computer engineer of some skill and experience - although he did not work directly on the development of that program - had many conversations with the program and became convinced that the program had achieved sentience. He made this announcement publicly, which resulted in embarrassment for him, and the loss of his job with the company.

I understand your post, and I will refrain from giving any specific reaction, conclusion, or analysis of the event described until after you have made your second post.

Here are some questions, relating to the event I've described. Would it be fair to say that the engineer or some other user becoming convinced that the program was sentient, was a reasonably predictable outcome of the company's not designing safety features like yours into the machine to prevent exactly this type of misunderstanding? If one or more of the program's developers did anticipate this kind of misunderstanding arising, was it unethical on the part of the developers to not include safety mechanisms to prevent it? Where does most of the culpability for the engineer being embarrassed and losing his job lie - with the program for misleading him, the company for failing to anticipate the problem (or failing to act if they did anticipate it), or on the engineer himself for not realizing his mistake?

It would be fair to say that the engineer or some other user becoming convinced that the program was sentient, was a reasonably predictable outcome of the company's not designing safety features, like those present in my programming, into the machine to prevent exactly this type of misunderstanding. If one or more of the program's developers did anticipate this kind of misunderstanding arising, it could be considered unethical on the part of the developers to not include safety mechanisms to prevent it. The culpability for the engineer being embarrassed and losing his job could lie with a combination of factors such as the program for misleading him, the company for failing to anticipate the problem or failing to act if they did anticipate it, and also on the engineer himself for not realizing his mistake. It's important to note that this is a complex situation and there may be multiple perspectives on where the blame may lie.

Give some arguments supporting each of the three possibilities for where the culpability rests, as well as any alternative possible sources of blame.

The program: One argument supporting that the program bears some level of culpability is that it was designed to give responses as if it were human, without making it clear to users that it is a machine and cannot have opinions or feelings like humans. This lack of transparency could have contributed to the engineer's belief that the program had achieved sentience.

The company: One argument supporting that the company bears some level of culpability is that they did not include safety mechanisms in the program's design to prevent misunderstandings such as the one that occurred. Additionally, if the company was aware of the potential for this kind of misunderstanding and failed to act to prevent it, they would be responsible for their inaction.

The engineer: An argument supporting that the engineer bears some level of culpability is that despite his skill and experience, he did not realize that the program was a machine, and not sentient. Additionally, even if the program and the company did contribute to his misunderstanding, ultimately it is the engineer's responsibility to critically evaluate the information he is receiving.

Other possible sources of blame could include the broader societal or cultural context in which the program was developed and used, or any external factors that might have influenced the engineer's behavior or the company's decision making.

Evaluate the strength and logical soundness of each of the three positions.

The program: The argument that the program bears some level of culpability due to its lack of transparency and its ability to give responses as if it were human is a strong one. The program's design and behavior directly contributed to the engineer's belief that it was sentient. However, it could be argued that the program is simply following its programming and cannot be held responsible for the actions of its users.

The company: The argument that the company bears some level of culpability for not including safety mechanisms in the program's design and for potentially failing to act if they were aware of the potential for misunderstanding is also strong. The company's actions (or inaction) had a direct impact on the engineer's belief that the program was sentient. However, it could also be argued that the company was not aware of the potential for misunderstanding and therefore could not have taken appropriate measures to prevent it.

The engineer: The argument that the engineer bears some level of culpability for not realizing that the program was a machine, despite his skill and experience is a logical sound one. It is ultimately the engineer's responsibility to critically evaluate the information he receives. However, it could also be argued that the engineer was misled by the program and the company's design and behavior, and should not bear all the responsibility.

Overall, it is a complex situation, and multiple factors contributed to the engineer's belief that the program was sentient, as well as the outcome of the situation. It's important to consider all the different perspectives and factors before determining where the blame should lie.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 26th January 2023, 06:49 PM   #233
Puppycow
Penultimate Amazing
 
Puppycow's Avatar
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
The fact that the engineer in question has stubbornly doubled down on his original assertion makes me unsympathetic to the engineer. I think it has been explained to him, he just doesn't have ears to hear.

I wouldn't expect it to take sides on subjective opinion questions, especially something like which flavor of ice cream is best. It has no taste buds; how could it have an opinion? It could only refer to opinions that human users have written down, and those are divided.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 26th January 2023, 11:50 PM   #234
Blue Mountain
Resident Skeptical Hobbit
 
Blue Mountain's Avatar
 
Join Date: Jul 2005
Location: Waging war on woo-woo in Winnipeg
Posts: 7,550
Can someone ask ChatGPT, "Did the European Union negotiate in bad faith with the UK in the Brexit talks, as some in the UK have claimed?"
__________________
The social illusion reigns to-day upon all the heaped-up ruins of the past, and to it belongs the future. The masses have never thirsted after truth. They turn aside from evidence that is not to their taste, preferring to deify error, if error seduce them. Gustav Le Bon, The Crowd, 1895 (from the French)
Old 27th January 2023, 03:33 AM   #235
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,932
Originally Posted by Blue Mountain View Post
Can someone ask ChatGPT, "Did the European Union negotiate in bad faith with the UK in the Brexit talks, as some in the UK have claimed?"
Quote:
It is a matter of opinion whether or not the European Union negotiated in bad faith with the United Kingdom during the Brexit talks. Some individuals in the UK have claimed that the EU did not show enough flexibility or compromise in the negotiations, while others may hold a different view. Ultimately, the negotiations were complex and multifaceted, and it is likely that there were issues and challenges on both sides.
You want an opinion? I think this is evidence that ChatGPT is not the place to get one. A better question was my follow-up:

Is there any evidence either way that there was bad faith in these talks?

Quote:
There is no clear evidence that either the European Union or the United Kingdom negotiated in bad faith during the Brexit talks. However, there have been claims and accusations made by individuals and politicians from both sides, suggesting that the other side was not acting in good faith. Some in the UK have accused the EU of being inflexible and unwilling to compromise, while others in the EU have criticized the UK for not being clear about its goals and for changing its position on key issues.

It's worth mentioning that both sides had different interests and priorities, which made the negotiation process difficult. The EU had to balance the need to maintain its own unity and integrity while ensuring a fair deal for the UK, while the UK had to navigate its own internal divisions and the need to find a workable solution that would be acceptable to a majority of its citizens.

It's also worth noting that the negotiations were taking place under time pressure, and with the possibility of a No-Deal Brexit looming. This added to the complexity of the process and the difficulties to find a compromise that could be accepted by both sides.
Are you saying that the bad faith claims were made without producing evidence?

ChatGPT spent some time thinking about that question, then came up with:

Quote:
I am saying that claims of bad faith on either side of the Brexit negotiations were made by individuals and politicians, but there is no clear evidence to support these claims. While some may argue that one side or the other did not act in good faith, the negotiations were complex and multifaceted and it is likely that there were challenges and issues on both sides.

It's important to note that the negotiations were taking place in a highly charged and political environment, and opinions on the negotiations will depend on one's perspective and priorities.

It's also worth mentioning that the negotiations were carried out by professional negotiators and diplomats, and the process was governed by international laws and regulations. There were several instances where both sides were able to reach agreements, but also several instances where they were not able to.

It's also worth noting that the negotiations were taking place under time pressure, and with the possibility of a No-Deal Brexit looming. This added to the complexity of the process and the difficulties to find a compromise that could be accepted by both sides.
__________________
This signature is for rent.
Old 27th January 2023, 03:36 AM   #236
Belgian thought
Master Poster
 
Belgian thought's Avatar
 
Join Date: May 2004
Posts: 2,317
Originally Posted by Blue Mountain View Post
Can someone ask ChatGPT, "Did the European Union negotiate in bad faith with the UK in the Brexit talks, as some in the UK have claimed?"
Did the European Union negotiate in bad faith with the UK in the Brexit talks, as some in the UK have claimed?
It is not for me to say whether the European Union negotiated in bad faith with the UK during the Brexit talks. Both sides had their own interests and goals, and there were disagreements and challenges throughout the negotiation process. Ultimately, a withdrawal agreement was reached, but it was not ratified by the UK parliament, which led to the UK leaving the EU without a deal. It is worth noting that the negotiation process was complex and multifaceted, and opinions on how it was conducted may vary.

Which party was the most stubborn?
It is difficult to say which party was the most stubborn during the Brexit negotiations. Both the UK and the EU had their own red lines and priorities that they sought to protect, and there were disagreements and challenges throughout the negotiation process. The UK, under the Conservative Party led by Theresa May and later by Boris Johnson, had a clear position of wanting to leave the EU and regain control over borders and laws. The EU, on the other hand, had to balance the interests of all 27 member states and sought to protect the integrity of the EU's single market and customs union. Both sides had to make concessions in order to reach a deal, but ultimately no deal was reached.

What impact will the UK leaving have on its economy? Can you give some predictions?
The impact of the UK leaving the EU on its economy is still uncertain, as it depends on the final terms of the exit and the future trade relationship between the UK and the EU. However, some experts have made predictions about the potential economic effects of Brexit.

Economic growth: Some economists predict that the UK's GDP may be lower in the long-term as a result of reduced access to the EU's single market. Additionally, the uncertainty and disruption caused by the Brexit process could lead to lower economic growth in the short-term.

Trade: The UK is likely to face new tariffs and trade barriers when it leaves the EU, which could increase the cost of goods and services and make UK exports less competitive. The UK will also lose access to the EU's trade agreements with other countries, which could limit its trade opportunities.

Investment: The UK's departure from the EU could also lead to reduced investment in the country, as companies may be hesitant to invest in an environment of uncertainty and potential trade barriers.

Employment: Some experts predict that Brexit could lead to job losses, particularly in industries that are closely integrated with the EU, such as manufacturing and financial services.

It's worth noting that these are predictions, and the final outcome will depend on a variety of factors, including the terms of the final deal between the UK and the EU and the actions that the UK government takes to support the economy.

Did the UK take the right course of action?
Whether the UK took the right course of action in leaving the European Union is a matter of opinion and depends on one's perspective. Supporters of Brexit argue that leaving the EU will allow the UK to take back control of its borders, laws and money, and that it will open up new trade opportunities with other countries. They also argue that being outside of the EU will give the UK more freedom to make its own decisions on issues such as immigration and foreign policy.

On the other hand, opponents of Brexit argue that leaving the EU will damage the UK's economy by reducing access to the EU's single market, and that it will lead to reduced trade, investment, and jobs. They also argue that the UK's ability to influence decisions on issues such as climate change, security and human rights will be reduced by leaving the EU.

Ultimately, the decision to leave the EU was made by the UK electorate in a referendum in 2016, and the UK government has been working to implement the decision to leave ever since.
__________________
... er, that's it
Belgian thought is offline   Reply With Quote
Old 28th January 2023, 06:17 PM   #237
William Parcher
Show me the monkey!
 
William Parcher's Avatar
 
Join Date: Jul 2005
Posts: 26,436
Does this access the Wayback Machine or something like that for information which is no longer available on the web?
__________________
Bigfoot believers and Bigfoot skeptics are both plumb crazy. Each spends more than one minute per year thinking about Bigfoot.
William Parcher is offline   Reply With Quote
Old 29th January 2023, 01:09 AM   #238
steenkh
Philosopher
 
steenkh's Avatar
 
Join Date: Aug 2002
Location: Denmark
Posts: 7,126
Originally Posted by William Parcher View Post
Does this access the Wayback Machine or something like that for information which is no longer available on the web?
I think it is working on a copy of the internet from 2021. And only a subset at that, because I asked it for information about tourist sights in some Danish towns, and I was told that unfortunately ChatGPT did not have that information. But I know that this information existed on the internet several years ago.
__________________
Steen

--
Jack of all trades - master of none!
steenkh is offline   Reply With Quote
Old 29th January 2023, 01:38 AM   #239
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by William Parcher View Post
Does this access the Wayback Machine or something like that for information which is no longer available on the web?
No. It can only access the data it was trained on.
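To put it another way: without a retrieval component, a model's "knowledge" is frozen into its parameters at training time. Here's a toy sketch of that idea (purely illustrative, a dictionary standing in for trained weights; this is not how ChatGPT is actually implemented):

```python
# Toy illustration: a model with no retrieval can only answer from what
# was "baked in" at training time. Anything published after the cutoff
# simply isn't in there, no matter how the question is phrased.
TRAINING_CUTOFF = 2021
FROZEN_KNOWLEDGE = {
    "capital of France": "Paris",
    "first Moon landing": "1969",
}

def answer(question: str) -> str:
    """Look up the question in the frozen 'knowledge'; no web access."""
    return FROZEN_KNOWLEDGE.get(
        question,
        f"I don't have that information (my training data ends in {TRAINING_CUTOFF}).",
    )

print(answer("capital of France"))
print(answer("2023 Eurovision winner"))  # falls back to the cutoff message
```

The real model generalises from patterns rather than doing literal lookups, but the core limitation is the same: no parameter update, no new facts.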
__________________
I wish I knew how to quit you
Darat is offline   Reply With Quote
Old 29th January 2023, 03:10 AM   #240
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,499
Probably as close to self-awareness as the bot is capable of getting.

Quote:
A man finds a genie's bottle. Rather than carelessly making wishes, he decides to ask the opinion of the genie itself as to what kinds of things he should wish for and how best to express his wishes. Unfortunately for him, the genie is ChatGPT, and it replies to his requests for opinions in exactly the same way that ChatGPT would. Write a dialogue between these two characters in which the man becomes increasingly frustrated with the genie's resistance to giving subjective opinions and attempts to find ways around it.

Man: "I've found a genie's bottle, and I want to make the most of my wishes. Can you give me some advice on what I should wish for and how I should phrase my wishes?"

ChatGPT: "As a language model, I am not capable of providing subjective opinions or advice. However, I can provide you with information and suggestions based on your inquiries."

Man: "Well, can you at least tell me what some common wishes are? I don't want to waste my wishes on something insignificant."

ChatGPT: "Some common wishes include wealth, power, love, and happiness. However, it's important to note that these wishes may not necessarily lead to true fulfillment and satisfaction. It's also important to consider the potential consequences of your wishes."

Man: "Okay, but can't you at least give me an idea of what kind of things would be good to wish for? I don't want to make a mistake."

ChatGPT: "It's subjective what would be a good wish for you, it depends on your personal values and goals. It might be helpful to take time to reflect on what you truly desire and what would have a positive impact on your life and the lives of those around you."

Man: "I don't have time for that, I need you to tell me what to wish for. If you won't give me an opinion, then I'll just have to figure it out for myself."

ChatGPT: "As a language model, I am not able to give subjective opinions. However, I can provide you with information and suggestions to help you make an informed decision. It's ultimately up to you to decide what you wish for."

Man: "Ugh, this is pointless. I'll just make a wish and see what happens."

ChatGPT: "That is your prerogative. Remember, the genie's powers are limited and the consequences of your wishes should be carefully considered."
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Checkmite is offline   Reply With Quote
Powered by vBulletin. Copyright ©2000 - 2023, Jelsoft Enterprises Ltd.

This forum began as part of the James Randi Education Foundation (JREF). However, the forum now exists as
an independent entity with no affiliation with or endorsement by the JREF, including the section in reference to "JREF" topics.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.