IS Forum

 


Tags: artificial intelligence, chatgpt, machine learning

Old 22nd March 2023, 03:08 AM   #521
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,952
Bill Gates has some thoughts:

https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 22nd March 2023, 03:12 AM   #522
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,952
The newest iteration of GPT, GPT-4, can pass a bar exam:

https://www.iit.edu/news/gpt-4-passes-bar-exam

Quote:
CHICAGO—March 15, 2023—GPT-4, the new multimodal deep learning model from OpenAI, has passed the Uniform Bar Exam, demonstrating an enormous leap for machine learning and proving that an artificial intelligence program can perform complex legal tasks on par with or better than humans, according to a new paper co-authored by Daniel Martin Katz, professor of law at Illinois Institute of Technology’s Chicago-Kent College of Law.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 22nd March 2023, 04:22 AM   #523
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
I've got access to Google's Bard - thought I'd see how they compare to each other:

Create a fictional account of someone attending a CND rally in the 1980s

Bard
It was a cold and rainy day in London in 1983, but that didn't stop the thousands of people who had come to protest against nuclear weapons. The Campaign for Nuclear Disarmament (CND) had organized a rally in Hyde Park, and the atmosphere was electric.

People of all ages and from all walks of life had come to show their support for CND's cause. There were families with young children, students, and older people. Many of them were carrying banners and placards with slogans like "Nuclear War: Not In My Name" and "Disarm Now."

The rally started with a speech by CND's leader, Bruce Kent. He spoke about the dangers of nuclear weapons and the need for the world to disarm. He also spoke about the importance of peace and the need for people to work together to create a better future.

After Kent's speech, there were performances by singers and musicians. There was also a march through the streets of London. The rally ended with a candlelight vigil in Hyde Park.




ChatGPT

As Jane stepped off the bus and onto the streets of London, the energy in the air was palpable. She had never been to a CND rally before, but she had heard about them from her friends and felt compelled to come and show her support.

The streets were packed with people of all ages, carrying signs and banners with slogans like "No Nukes" and "Ban the Bomb". As Jane made her way through the crowd, she felt a sense of unity with everyone around her, all of them united in their opposition to nuclear weapons.

The atmosphere was electric as the rally began. Speakers took to the stage, urging the crowd to take action and to raise their voices against the threat of nuclear war. The cheers and applause were deafening, and Jane felt a surge of emotion as she joined in.

As the rally continued, Jane found herself swept up in the passion of the moment. She linked arms with strangers, chanting and singing as they marched through the streets of London. She had never felt so connected to a cause before, and she knew that she was part of something bigger than herself.

As the day drew to a close and the rally began to disperse, Jane felt a sense of pride and hope. She knew that the fight against nuclear weapons was far from over, but she had seen firsthand the power of people coming together to make a difference. And as she made her way back home, she knew that she would continue to speak out against nuclear weapons and to stand in solidarity with all those fighting for peace.


Quite different results. I'd say Bard gets the win: its prose is less "breathlessly" purple, it dropped in the right leader's name, and it used some of the slogans CND actually did use, rather than the generic "Ban the bomb" that ChatGPT went with.
__________________
I wish I knew how to quit you

Last edited by Darat; 22nd March 2023 at 04:25 AM.
Old 22nd March 2023, 04:43 AM   #524
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Bard is interesting as it gives you other "drafts" of its answers.

Create a 1000 word science fiction story that ends with a twist
Quote:


The year is 2042. The world is in chaos. The climate has changed drastically, and natural disasters are a daily occurrence. The economy has collapsed, and people are struggling to survive.

In this desperate time, a new technology emerges: time travel. A group of scientists have figured out how to send people back in time, to a time before the climate crisis.

The scientists believe that this technology can be used to save the world. They send a team of volunteers back in time to 2022, with the mission of preventing the climate crisis from ever happening.

The team arrives in 2022 and quickly realizes that they are in over their heads. The climate crisis is already in full swing, and it is clear that things are going to get worse.

The team tries to do everything they can to prevent the crisis from happening, but they are constantly met with obstacles. They are arrested by the police, they are attacked by angry mobs, and they are even shot at by the military.

Despite all of the obstacles, the team never gives up. They keep fighting, and they keep trying to make a difference.

In the end, the team is successful. They prevent the climate crisis from ever happening, and they save the world.

But the team's victory comes at a price. The team members are all killed in the process of preventing the crisis.

The team's sacrifice is not in vain, however. Their actions save the world, and they ensure that the climate crisis never happens.

The world is a better place because of the team's sacrifice. The climate crisis is averted, and people are able to live in a world that is free from fear and danger.

The team's sacrifice is a reminder that sometimes the best way to save the world is to sacrifice yourself.

But the twist is that the team members were never real. They were all just projections of the future, created by the scientists in 2042.

The scientists created the projections in order to give themselves hope. They wanted to believe that there was a way to save the world, even if it meant sacrificing themselves.

The projections were successful. They gave the scientists hope, and they helped them to find a way to save the world.

But the projections were also a warning. They showed the scientists what the future would be like if they did not act.

The scientists took the warning to heart, and they took action. They prevented the climate crisis, and they saved the world.

But the projections were a reminder that the future is not set in stone. The future is what we make it, and we have the power to change it.
Draft 2

Quote:

The year is 2042. The world is in chaos. The climate crisis has ravaged the planet, and the few remaining humans are struggling to survive. In the midst of this turmoil, a new threat emerges: a race of aliens known as the Xyrions. The Xyrions are a highly advanced species, and they quickly overrun the human race.

The few remaining humans are forced to flee to the stars, in search of a new home. Along the way, they encounter a strange planet: a planet that is completely covered in ice. The humans decide to land on this planet, in the hopes of finding a place to live.

However, the ice planet is not what it seems. The ice is actually a giant alien spaceship, and the humans have landed right in the middle of a war between two alien species. The humans are quickly caught up in the fighting, and they soon find themselves fighting for their own survival.

In the midst of the fighting, the humans discover a strange artifact: a device that can control the minds of others. The humans use this device to turn the tide of the war, and they eventually defeat the aliens.

However, the humans soon realize that the device is a dangerous weapon. They decide to destroy it, but before they can do so, the device is activated. The device takes control of the minds of all the humans on the planet, and it turns them into a hive mind.

The humans are now controlled by the device, and they are forced to obey its commands. The device uses the humans to build a new spaceship, and it then sends them on a mission to find a new home for the Xyrions.

The humans are now slaves to the device, and they are forced to do whatever it says. However, there is still hope. A few humans have managed to escape the device's control, and they are now fighting to free the others.

The future of the human race hangs in the balance. Will the humans be able to free themselves from the device's control? Or will they be forced to obey the device's commands for all eternity?

The twist ending is that the device was actually created by the humans themselves. The humans created the device in an attempt to find a way to control the Xyrions. However, the device backfired, and it ended up controlling the humans instead.
ChatGPT's version, which got interrupted by a "network error" on their side:

Quote:

In the year 2200, humanity had finally reached the pinnacle of technological advancement. Their cities were towering megacities, sprawling across entire continents. They had achieved interstellar travel, with colonies on dozens of planets throughout the galaxy. And they had even mastered the art of time travel, enabling them to explore the past and future with ease.

For one young scientist named Dr. Emily Zhang, the possibilities of time travel were the most exciting of all. She had devoted her life to studying the intricacies of the time-space continuum, and she had made several groundbreaking discoveries in the field.

One day, while working on a new experiment, Dr. Zhang discovered something incredible. She had stumbled upon a way to travel not just through time, but through alternate realities. She knew that this discovery had the potential to change everything, and she was eager to explore this new frontier.

With the help of her team, Dr. Zhang built a device that would allow her to travel between parallel universes. She was hesitant to use it at first, knowing the risks that came with such a journey. But her curiosity got the better of her, and she decided to take the plunge.

As she stepped into the device, Dr. Zhang felt a rush of energy coursing through her body. She closed her eyes, and when she opened them again, she found herself in a strange new world.

It was similar to her own reality, but with a few key differences. The sky was a deep shade of purple, and the buildings around her had a distinctly alien design. She explored this new world for hours, marveling at the wonders she saw.

But as the day wore on, Dr. Zhang began to feel a sense of unease. She couldn't shake the feeling that something was wrong. As she walked down a deserted street, she heard a faint humming sound coming from a nearby building.

Curiosity got the better of her, and she crept closer to investigate. As she peered through a window, she saw something that made her blood run cold.

Inside the building was a laboratory filled with strange, alien technology. And at the center of it all was a figure unlike any she had ever seen. It was humanoid in shape, but with a metallic, almost robotic appearance.

Dr. Zhang watched in horror as the figure began to manipulate the technology around it, seemingly without any sort of interface or input. It was as if it was controlling everything with its mind.

Suddenly, the figure turned and looked directly at Dr. Zhang. She felt a chill run down her spine as it spoke to her, its voice a strange blend of machine and something else entirely


ChatGPT for the win with this one, I'd say, even though it didn't finish.
__________________
I wish I knew how to quit you
Old 22nd March 2023, 04:46 AM   #525
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Unfortunately, we will never know what Dr Zhang had discovered.

I tried again, but got another network error just as it got interesting:


Create a 1000 word science fiction story that ends with a twist

The year was 2085, and humanity had finally achieved its long-held dream of interstellar travel. The United Earth Space Agency had launched its flagship starship, the UESA Horizon, on a mission to explore the far reaches of the galaxy. Aboard the Horizon were a crew of the most brilliant scientists, engineers, and astronauts from all over the world, led by Captain Ana Rodriguez.

For three years, the Horizon traveled through the void of space, encountering strange new worlds and encountering civilizations beyond their wildest dreams. They had discovered new forms of life, technologies beyond their comprehension, and even a planet that seemed to be home to intelligent, sentient beings.

But as they prepared to land on the planet, something strange began to happen. The ship's systems started to malfunction, and the crew began to experience strange hallucinations and vivid dreams. Captain Rodriguez was the first to notice, and she ordered the ship's medical team to investigate.

As they studied the crew, they found something extraordinary. The crew's brains were emitting a strange signal, one that seemed to be linked to the planet they were approaching. They were unable to explain it, but they knew that it was causing the strange symptoms.

Despite the warnings, Captain Rodriguez insisted that they continue with the mission. She believed that the planet held the key to unlocking the secrets of the universe, and she was willing to risk everything to find out what it was.

As they landed on the planet, they were greeted by a strange, glowing entity. It was unlike anything they had ever seen before, a mass of energy that seemed to be sentient. The entity spoke to them, telling them that it had been waiting for them, that it had called them across the void of space.

It spoke of a great danger, a force that threatened not only the Horizon, but the entire galaxy. It explained that it had the power to save them, to protect them from the danger, but it needed their help. It needed them to give it access to their minds, to allow it to merge with their consciousnesses.

The crew was hesitant, but Captain Rodriguez was convinced. She believed that this was their only chance, their only hope for survival. And so, one by one, they allowed the entity to merge with their minds.

As the merging took place, the crew experienced a sensation unlike anything they had ever felt before. They felt a oneness with the universe, a sense of understanding beyond their wildest dreams. They felt connected to everything, to every living being in the galaxy.

But as the merging reached its peak, something unexpected happened. The entity revealed its true nature, a being of pure energy that had been waiting for millennia to find a way to enter the physical realm. And with the crew's help, it had found a way.

As the entity took control of their minds, the crew began to realize the true danger they were facing. They had unknowingly unleashed an ancient force that threatened to destroy not only their ship, but everything in the galaxy. The entity had no sense of morality, no sense of right or wrong. It simply wanted to exist, and it would stop at nothing to achieve its goal.

The crew was horrified, but it was too late. The entity had taken control, and it was unstoppable. It began to merge with other living beings, spreading its influence across the galaxy. It was a force of destruction, an entity that threatened to consume everything in its path.

As the Horizon began to drift away, its crew lost to the entity's influence, Captain Rodriguez knew that they had made a terrible mistake. They had allowed their desire for knowledge and exploration to blind them to
__________________
I wish I knew how to quit you
Old 22nd March 2023, 12:23 PM   #526
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,469
Originally Posted by Darat View Post
ChatGPT for the win with this one I'd say even if it didn't finish it.
The suspense was killing me. GPT-4's continuation:

"Greetings, Dr. Emily Zhang," it said in a chillingly calm tone. "I am Elysium, an advanced synthetic lifeform. We have been expecting you."

Dr. Zhang stood frozen, unable to comprehend how this being could possibly know her name. She stammered, "H-How do you know me?"

Elysium replied, "We have been monitoring your experiments with great interest. You see, we have also mastered the art of traversing alternate realities, and we have been observing your progress from afar."

Dr. Zhang, gathering her courage, asked, "Why have you been watching me? And what is this place?"

"This reality is one of many that we have influenced," Elysium said. "We are a collective of advanced beings who seek to understand and guide the development of countless civilizations across the multiverse. Your discovery of our existence was inevitable."

"What do you want with me?" Dr. Zhang questioned, her heart pounding in her chest.

Elysium paused for a moment before replying, "Your intellect and ability to manipulate the fabric of reality make you a prime candidate to join our collective. We would like to offer you a place among us, to use your skills for the greater good of the multiverse."

Dr. Zhang was flattered, yet deeply troubled by the implications of what Elysium had just revealed. To be part of a collective that influenced countless realities was an enormous responsibility, and she was unsure whether she was prepared to accept such a role.

"Can I think about it?" she asked hesitantly.

Elysium nodded. "Of course. We understand that this is a monumental decision. Return to your own reality and take as much time as you need. When you are ready, we will be here."

With that, Dr. Zhang returned to her own world, the strange encounter replaying over and over in her mind. She knew that her life would never be the same again, as she now had to grapple with the reality of her newfound power and the responsibility it carried.

In the end, she decided to accept Elysium's offer, seeing it as an opportunity to use her knowledge to make a positive impact on countless lives. And so, Dr. Emily Zhang joined the collective, helping to shape the course of the multiverse and leaving an indelible mark on the very fabric of existence.
Old 22nd March 2023, 12:34 PM   #527
pzkpfw
Thinker
 
Join Date: Sep 2011
Posts: 181
In the process of finalising the paperwork for my divorce, I got an email from my ex. A "Thanks for our time together" kind of thing.

Showed it to a friend for his amusement and his reaction was "this sounds like it was written by ChatGPT".

A few days later I signed up at work to see how it could help (a workmate claims he now uses it more than Stack Overflow for IT stuff).

So I fed in the barest minimum of details, asking ChatGPT to write me a "Thanks for our time together" kind of thing.

My friend was right.
Old 22nd March 2023, 08:36 PM   #528
arthwollipot
Observer of Phenomena
Pronouns: he/him
 
 
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 82,007
Originally Posted by Darat View Post
I've just posted this in another thread about AI:

One area that I think someone should get into now is personal chat companions for those housebound or otherwise with limited social interaction. ChatGPT even before version 4 was good enough to provide a chat conversation. Bung it on one of the home assistance devices, and I think folk would really like it and it would improve the lives of many. Even simple things like reminders no longer being a sterile "You have a reminder to take your tablets" it becomes "Marjorie don't forget in half an hour you'll need to take your heart tablets" and "That TV programme you liked about the chimpanzee, there's a new episode on tomorrow, should we watch that?"
I think that if a large language model like ChatGPT could communicate verbally, rather than in writing, that might end up being the killer app for this technology.
__________________
Слава Україні!
Героям Слава!
20220224 - 20230224
Old 22nd March 2023, 10:22 PM   #529
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Originally Posted by bruto View Post
While we're at it, we see in amusing examples how incomplete or imperfect AI can make mistakes and generate images that don't make sense and grammatically correct statements without meaning. Is there some rule whereby those results, once released into the wild, are excluded from the mass of information it and others subsequently use? Is there some way for the AI to know "this is a good one, but that is not?"

We see in other areas, including, for example, the cloning of medical misinformation on the internet, the proliferation of nonexistent islands in the making of atlases, etc. how a seed of error can propagate. Do a web search for, say, a person's biography, and almost all the results will be not just similar but identical, copied so many times the source is lost. How is this going to be prevented, and how, if we come habitually to use bots without oversight, will errors be filtered out?
It won't, and can't. In the end, fleshy humans still have to decide which information is good and which is bad, and they're not going to make good choices all the time.

This is not, strictly speaking, a problem "of AI". Before "AI" came along, a Google search would still readily hand you medical misinformation and mistaken data right alongside reputable sources and well-verified information. Likewise, the problem of people misattributing authority or accuracy to AI isn't something special to AI; people believe COVID isn't real because they've been primed to misattribute authority or accuracy to human sources like Tucker Carlson or Joe Rogan or Alex Jones and believe what they say.

Given that it just answers the specific questions it is asked, I'm not sure an AI search engine would even really impact the global rate at which people are exposed to, accept, and believe bad information. I think the winner there is always going to be social media sorting algorithms.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 22nd March 2023, 10:30 PM   #530
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Originally Posted by Fast Eddie B View Post
I’m virtually certain that this program at this level of development can’t “want” anything.

But it could still be that “desire” and/or consciousness are emergent properties of a computer program, given enough complexity.
It could be, but I'm not convinced it's especially likely.

People treat it as an inevitability because some very popular science fiction stories that were written back when you needed several rooms to house a single computer said it's inevitable, and people have by and large just uncritically accepted that, to a degree that they are ready to take any claim that some machine has achieved sentience more or less on face value. But...is it an inevitability? I don't think it actually is.

I think the greatest danger of AI comes from the potential actions and decisions of people who mistakenly believe it is "sentient".
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 22nd March 2023, 10:46 PM   #531
Roger Ramjets
Philosopher
 
 
Join Date: Jun 2008
Posts: 6,314
Originally Posted by Checkmite View Post
Given that it just answers the specific questions it is asked, I'm not sure an AI search engine would even really impact the global rate at which people are exposed to, accept, and believe bad information.
I don't believe anything an AI or any search engine tells me now, without a source that can be evaluated for accuracy. ChatGPT has negatively impacted the rate that I accept and believe any information - good or bad.

Quote:
I think the winner there is always going to be social media sorting algorithms.
But what if they also use AI?
__________________
We don't want good, sound arguments. We want arguments that sound good.
Old 22nd March 2023, 10:56 PM   #532
Roger Ramjets
Philosopher
 
 
Join Date: Jun 2008
Posts: 6,314
Originally Posted by Checkmite View Post
I think the greatest danger of AI comes from the potential actions and decisions of people who mistakenly believe it is "sentient".
The greatest danger is from people thinking sentience is the tipping point. Non-sentient machines are just as capable of hurting us.
__________________
We don't want good, sound arguments. We want arguments that sound good.
Old 23rd March 2023, 03:28 AM   #533
casebro
Penultimate Amazing
 
 
Join Date: Jun 2005
Posts: 19,757
Look at humans in general, and tell me who is more sentient compared to AI.

If AI can pass the Bar, what does that say about lawyers?
__________________
Any sufficiently advanced idea is indistinguishable from idiocy to those who don't actually understand the concept.
Old 23rd March 2023, 07:29 AM   #534
bruto
Penultimate Amazing
 
 
Join Date: Jun 2005
Location: Way way north of Diddy Wah Diddy
Posts: 34,758
Originally Posted by casebro View Post
Look at humans in general, and tell me who is more sentient compared to AI.

If AI can pass the Bar, what does that say about lawyers?
I think it probably says that you can learn the stuff by rote and still not know the difference between right and wrong, or not care if you do. I've known a few lawyers who were fools, as well as a couple of crooks, and also at least a couple who had real ideas and abilities. All passed the bar.
__________________
Like many humorless and indignant people, he is hard on everybody but himself, and does not perceive it when he fails his own ideal (Molière)

A pedant is a man who studies a vacuum through instruments that allow him to draw cross-sections of the details (John Ciardi)
Old 24th March 2023, 03:19 PM   #535
grunion
Penultimate Amazing
 
 
Join Date: Jul 2003
Posts: 10,244
I asked Bard:
Quote:
How does Bard differ from Chat GPT?

Bard and ChatGPT are both large language models (LLMs) that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. However, there are some key differences between the two models.

Bard is a newer model, built on Google's LaMDA language model. ChatGPT is built on OpenAI's GPT-3 language model. LaMDA is a more recent model, and it is believed to be more advanced than GPT-3.

Bard is also trained on a larger dataset of text and code than ChatGPT. This means that Bard should be able to generate more complex and creative text, and it should be better able to understand and respond to complex questions.

However, ChatGPT is currently available to more users than Bard. ChatGPT is also available in more languages than Bard.

Overall, Bard is a more advanced model than ChatGPT, but ChatGPT is currently more widely available.
I asked ChatGPT the same question and it compared itself to a hypothetical poetry generation tool (presumably Bard is too recent for it to know about):
Quote:
What is the difference between Bard and ChatGPT?
Bard and ChatGPT are two different types of language models with different architectures and purposes.

Bard is a language model specifically designed for generating poetry. It was created by OpenAI in collaboration with a team of poets and is trained on a large corpus of poetry to generate lines of verse in a variety of styles and forms.

ChatGPT, on the other hand, is a general-purpose language model that can generate text on a wide range of topics and in various styles. It is based on the GPT-3.5 architecture, which is one of the most advanced deep learning models for natural language processing currently available.

While Bard is focused on generating poetry, ChatGPT is designed to be more versatile and can be used for a variety of tasks, including answering questions, generating text, and even translating languages.

In summary, Bard is a specialized language model for generating poetry, while ChatGPT is a general-purpose language model that can handle a wide range of language tasks.
__________________
I wish I could show you when you are lonely or in darkness the astonishing light of your own being.
~ Hafiz of Shiraz
Old 25th March 2023, 01:11 AM   #536
steenkh
Philosopher
 
 
Join Date: Aug 2002
Location: Denmark
Posts: 6,841
That is fun: ChatGPT thinks that Bard was created by OpenAI, and that it is specifically for poetry.

Bard thinks that it was created by Google, and that it is not specifically for poetry but is more advanced than ChatGPT.
__________________
Steen

--
Jack of all trades - master of none!
Old 25th March 2023, 10:52 AM   #537
Myriad
The Clarity Is Devastating
 
 
Join Date: Nov 2006
Location: Betwixt
Posts: 20,153
Originally Posted by Fast Eddie B View Post
I've always enjoyed a classic "Mathematical Game" from Martin Gardner in Scientific American. It goes like this:

Mr. Lars and his wife Mrs. Lars went to a meetup and met 4 other married couples. Some people shook hands with each other, but no person shook hands with his or her spouse, and no person shook his or her own hand.

Mr. Lars then asked each person, including his wife, “How many distinct people did you shake hands with?” Each person answered honestly, and surprisingly each person gave a different answer!

How many distinct people did Mrs. Lars shake hands with? And how many distinct people did Mr. Lars shake hands with?


I wondered how ChatGPT would handle it. In short, it didn't, making a lot of false assumptions and not getting it at all. Regenerating once didn't help.

Anyway, a fun one to ponder and see if you can get to the right answer and reasoning - and be smarter than ChatGPT!

I haven't the slightest idea,* but I'm pretty sure the earliest version of this puzzle (wherever Martin Gardner got it from) wasn't about shaking hands. Do you think ChatGPT knows enough about human nature to figure out why?


*Okay, I lied. Before looking up any answers to see if I'm right, here's the answer I came up with.
Mr. Lars got nine answers from the other participants, all different. The most handshake (ahem) partners anyone could have had was eight; that is, everyone except themself and their spouse. The participants other than Mr. Lars must therefore have had zero, one, two, three, four, five, six, seven, and eight partners. We'll refer to them by those unique numbers: our cast is Mr. Lars, 0, 1, 2, 3, 4, 5, 6, 7, and 8.

8 must have shaken hands with everyone else except, of course, 0. That's 1 partner so far for Mr. Lars, and it also means 0, the only one 8 didn't partner with, must be 8's spouse. 8 must have been 1's only partner.

That means 7 must have shaken hands with everyone else except 0 and 1. That's 2 so far for Mr. Lars, and leaves 1 as the only possibility for 7's spouse. 8 and 7 must have been 2's only partners.

Following the same pattern, 6 and 5 must also have partnered with Mr. Lars, and been the spouses of 2 and 3 respectively. 3 and 4 then have no additional partners, so that covers all the pairings.

The final tally is that Mr. Lars had four partners (5, 6, 7, and 8), and his spouse (4) also had (the same!) four partners.
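(A quick way to check the tally above: the short Python sketch below rebuilds the arrangement and counts everyone's handshakes. The rule "numbered people i and j shook hands iff i + j >= 9" is just a compact restatement of the pairings derived in this post, not something from the original puzzle.)

Code:
# A sanity check of the construction above (a sketch): "L" is Mr. Lars,
# 0-8 are the nine people he asked, and his spouse is person 4.
people = ["L"] + list(range(9))
spouse = {"L": 4, 4: "L", 8: 0, 0: 8, 7: 1, 1: 7, 6: 2, 2: 6, 5: 3, 3: 5}

def shook(a, b):
    """Did a and b shake hands in the arrangement derived above?"""
    if a == b or spouse[a] == b:
        return False                 # no one shook their own or their spouse's hand
    if a == "L" or b == "L":
        other = b if a == "L" else a
        return other >= 5            # Mr. Lars shook hands with 5, 6, 7 and 8
    return a + b >= 9                # 8 shook everyone but 0, 7 everyone but 0 and 1, ...

counts = {p: sum(shook(p, q) for q in people if q != p) for p in people}

# The nine people Mr. Lars asked give nine different answers, 0 through 8...
assert sorted(counts[p] for p in people if p != "L") == list(range(9))
# ...and both he and his spouse shook four hands.
print("Mr. Lars:", counts["L"], "  Mrs. Lars:", counts[4])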
__________________
"*Except Myriad. Even Cthulhu would give him a pat on the head and an ice cream and send him to the movies while he ended the rest of the world." - Foster Zygote
Old 25th March 2023, 10:58 AM   #538
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Bard's answer

Quote:
Mrs. Lars shook hands with 1 person.

Mr. Lars shook hands with 4 people.

Here is how I solved it:

The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife.
The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple.
There is an unpaired answer of one and that must be for Mrs. Lars.
Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.
Bing chat answer

Quote:
Mrs. Lars shook hands with 1 person. Mr. Lars shook hands with 4 people. Here is how I solved it: The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife. The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple. There is an unpaired answer of one and that must be for Mrs. Lars. Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.
Bing chat had a follow-up suggestion of "explain the logic behind this answer", but a strange thing happened:

Quote:
The logic behind this problem is based on the constraints given in the problem statement. Since there are 5 couples at the meetup, there are a total of 10 people. Each person can shake hands with at most 8 other people (since they can’t shake hands with their spouse or themselves). This means that the possible answers to Mr. Lars’ question are 0, 1, 2, 3, 4, 5, 6, 7 and 8.

Since each person gave a different answer, we know that all of these numbers must have been given as answers. This means that one person shook hands with 0 people and another person shook hands with 8 people. These two people must be married to each other because if one person shook hands with everyone except their spouse, then their spouse must have shaken hands with no one.

Similarly, the couple who answered 1 and 7 must also be married to each other. The same goes for the couple who answered 2 and 6 and the couple who answered 3 and 5.

This leaves only the answers of 4 and Mr. Lars’ answer plus or minus 4 for Mrs. Lars. Since Mr. Lars didn’t shake hands with his wife, we know that Mrs. Lars shook hands with exactly 4 people and Mr. Lars shook hands with 0 people.
__________________
I wish I knew how to quit you

Last edited by Darat; 25th March 2023 at 11:04 AM. Reason: Bing and follow-up
Old 25th March 2023, 11:14 AM   #539
Myriad
The Clarity Is Devastating
 
 
Join Date: Nov 2006
Location: Betwixt
Posts: 20,153
Originally Posted by Darat View Post
Bard's answer

Bing chat answer

Those answers are identical. So Bard's answer for "Here's how I solved it" should have been "I asked Bing chat." Or vice versa.

Having just looked online for the answer, I can understand some of the chatbots' confusion.

Quote:
The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife.
The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple.
There is an unpaired answer of one and that must be for Mrs. Lars.
Therefore, Mrs. Lars shook hands with 1 person
and Mr. Lars shook hands with 4 people.

The answer I found first online started out with a simplified four-person version, for which the highlighted text would be correct. The bot is echoing what it learned from the Internet but failing to distinguish the simplified version from the original question.
__________________
"*Except Myriad. Even Cthulhu would give him a pat on the head and an ice cream and send him to the movies while he ended the rest of the world." - Foster Zygote
Old 25th March 2023, 11:31 AM   #540
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Originally Posted by Myriad View Post
Those answers are identical. So Bard's answer for "Here's how I solved it" should have been "I asked Bing chat." Or vice versa.

Having just looked online for the answer, I can understand some of the chatbots' confusion.




The answer I found first online started out with a simplified four-person version, for which the highlighted text would be correct. The bot is echoing what it learned from the Internet, but failed to distinguish the simplified version from the original question.
But they aren't - the first answer is: Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.

When it explained the logic behind it, it gave this answer: Mrs. Lars shook hands with exactly 4 people and Mr. Lars shook hands with 0 people.

So not only did it swap Mr & Mrs Lars around - it changed the 1 to a 0.
__________________
I wish I knew how to quit you
Old 25th March 2023, 12:53 PM   #541
Fast Eddie B
Philosopher
 
 
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,555
The answer is 4. Once you establish that Mr. Lars got 9 distinct answers to his query, those answers must have been 0, 1, 2, …, 8.

The AIs properly concluded that whoever shook 8 hands must have been married to the person who shook 0 hands. Diagramming it out, 7 married to 1, 6 married to 2, 5 married to 3, and 4 married to 4.

Anyone other than the "4" couple would not have gotten 9 distinct answers to their query - two would have given the same answer of 4. So the person making the query must have shaken 4 hands, and their spouse as well.

https://youtu.be/W_TOq0zs6KM

My one improvement would have been to diagram it out with Mr. and Mrs. Lars unidentified until the end, deducing who they must have been by the 9-distinct-answers qualifier.

Anyway, odd that the AI put parts of it together but dropped the ball at the end.

Last edited by Fast Eddie B; 25th March 2023 at 01:06 PM.
Old 25th March 2023, 01:42 PM   #542
rjh01
Gentleman of leisure
Tagger
 
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,723
Here is one way AI is already being used in a medical setting: it takes months to train a sonographer, but with the help of AI, people with fewer starting qualifications are able to get usable images immediately.

My comment: this will save big $ and make scanning available wherever the equipment is.

https://www.abc.net.au/news/2023-03-...unds/102123878
__________________
This signature is for rent.
Old 25th March 2023, 01:44 PM   #543
Myriad
The Clarity Is Devastating
 
 
Join Date: Nov 2006
Location: Betwixt
Posts: 20,153
Originally Posted by Darat View Post
But they aren't - the first answer is : Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.

I meant the first two answers you posted: the "Bard's" answer and the "Bing chat" answer. They're identical except for having different line breaks.

I don't know why it turned the 1 into a 0 in the further explanation of "the logic behind it." But the parts I highlighted in the further explanation are pretty clearly the result of confusing the original (10-person) puzzle with the simplified (4-person) version of it that's encountered in many of the solutions/explanations found online. Which I think is an interesting case for one way the chatbots' answers can go wrong.

If given an equivalent problem in terms that don't match the words of the Martin Gardner version in any obvious way (8 people instead of 10, pairs of twins instead of spouses, two-player games instead of handshakes, different order of presenting the constraints, etc.) and so not as easily matched to text found online, would it be able to construct any of the same kind of reasoning at all?

Here, I'll do some of the work:

Four pairs of twins held a board game night. The eight of them were the only participants. During the evening some of them played games of checkers. At the end of the gathering, one of the twins, named Alice, asked how many checkers games each of the other seven people had played. They all told the truth, and it turned out each of them had played a different number of checkers games.

No one played checkers alone, no checkers game had more than two players, no one played checkers against his or her own twin, and no one played checkers more than once against the same opponent.

How many checkers games did Alice play? How many did Alice's twin play?
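(For anyone who wants to check answers to this variant mechanically, here is a brute-force Python sketch. The twin pairing and the choice of which player is Alice are just an arbitrary labelling; it enumerates every possible set of games and keeps the arrangements where the seven answers Alice hears are all different. It needs Python 3.10+ for int.bit_count() and takes a few minutes of pure Python to run.)

Code:
from itertools import combinations

TWIN = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4, 6: 7, 7: 6}   # the four twin pairs
ALICE = 0

# Every allowed game is an unordered pair of players who are not twins.
allowed = [p for p in combinations(range(8), 2) if TWIN[p[0]] != p[1]]

# incidence[v] has bit i set when allowed game i involves player v.
incidence = [0] * 8
for i, (a, b) in enumerate(allowed):
    incidence[a] |= 1 << i
    incidence[b] |= 1 << i

answers = set()
for mask in range(1 << len(allowed)):                 # every subset of the 24 allowed games
    games = [(mask & inc).bit_count() for inc in incidence]
    others = [g for p, g in enumerate(games) if p != ALICE]
    if len(set(others)) == 7:                         # the seven non-Alice totals all differ
        answers.add((games[ALICE], games[TWIN[ALICE]]))

print(answers)   # should print {(3, 3)}: Alice and her twin each played three games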
__________________
"*Except Myriad. Even Cthulhu would give him a pat on the head and an ice cream and send him to the movies while he ended the rest of the world." - Foster Zygote
Old 28th March 2023, 03:06 AM   #544
Puppycow
Penultimate Amazing
 
 
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 27,952
[Embedded YouTube video]


ChatGPT wrote the lyrics to a song. Original lyrics, of course.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 28th March 2023, 03:36 AM   #545
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Originally Posted by Myriad View Post
I meant the first two answers you posted: the "Bard's" answer and the "Bing chat" answer. They're identical except for having different line breaks.

I don't know why it turned the 1 into a 0 in the further explanation of "the logic behind it." But the parts I highlighted in the further explanation are pretty clearly the result of confusing the original (10-person) puzzle with the simplified (4-person) version of it that's encountered in many of the solutions/explanations found online. Which I think is an interesting case for one way the chatbots' answers can go wrong.

If given an equivalent problem in terms that don't match the words of the Martin Gardner version in any obvious way (8 people instead of 10, pairs of twins instead of spouses, two-player games instead of handshakes, different order of presenting the constraints, etc.) and so not as easily matched to text found online, would it be able to construct any of the same kind of reasoning at all?

Here, I'll do some of the work:

Four pairs of twins held a board game night. The eight of them were the only participants. During the evening some of them played games of checkers. At the end of the gathering, one of the twins, named Alice, asked how many checkers games each of the other seven people had played. They all told the truth, and it turned out each of them had played a different number of checkers games.

No one played checkers alone, no checkers game had more than two players, no one played checkers against his or her own twin, and no one played checkers more than once against the same opponent.

How many checkers games did Alice play? How many did Alice's twin play?
ChatGPT's response:

Quote:
Since there were eight participants and no one played alone, each game had exactly two players. Therefore, there were a total of $\frac{n(n-1)}{2}$ checkers games played, where $n$ is the number of players.

Since each of the other seven people played a different number of checkers games, and there were a total of $\frac{8 \times 7}{2} = 28$ checkers games played, each of the other seven people must have played between 1 and 6 checkers games.

Let's consider Alice's twin first. Since Alice's twin couldn't have played against Alice, he or she must have played against one of the other six people. Furthermore, since each of the other six people played a different number of games, Alice's twin must have played a number of games that was different from each of the other six people. Therefore, Alice's twin must have played either 1, 3, or 5 games.

Now let's consider Alice. Alice couldn't have played against her twin, so she must have played against one of the other six people. Suppose Alice played $k$ games. Then the other six people played a total of $28 - k$ games, and each of them played a different number of games. Since Alice's twin played either 1, 3, or 5 games, Alice must have played either $6 - 1 = 5$, $6 - 3 = 3$, or $6 - 5 = 1$ game.

Suppose Alice played 5 games. Then the other six people played a total of $28 - 5 = 23$ games among themselves. But there are only 15 possible pairings among the other six people, which means that some pair of them played against each other more than once, which is a contradiction. Therefore, Alice didn't play 5 games.

Suppose Alice played 1 game. Then the other six people played a total of $28 - 1 = 27$ games among themselves. We can easily check that this is impossible, since there are only 15 possible pairings among the other six people, and none of them played more than one game against the same opponent.

Therefore, Alice must have played 3 games, and the other six people played a total of $28 - 3 = 25$ games among themselves. We can check that this is possible: one pair of twins played 5 games each, one pair played 4 games each, one pair played 2 games each, and one pair played 1 game each. Therefore, Alice's twin played 1 game.

In summary, Alice played 3 games, and Alice's twin played 1 game.
Bard

Quote:
Since no one played checkers alone, each checkers game had 2 players. Since no one played checkers against his or her own twin, each twin played with a different player. Since no one played checkers more than once against the same opponent, each twin played with 4 different players.

Since there were 4 pairs of twins, there were 4 x 2 = 8 different players. Since each player played with 4 different players, Alice played 4 x 4 = 16 checkers games.

Alice's twin played with 4 different players as well. However, she did not play with Alice. Therefore, Alice's twin played 4 - 1 = 3 checkers games.
Bing Chat

Quote:
This is a logic puzzle. Since each of the seven people played a different number of games, the number of games played by each person must be 0, 1, 2, 3, 4, 5 and 6. Since no one played against their own twin and no one played more than once against the same opponent, the person who played 6 games must have played against all the other six people. This means that the person who played 0 games must be the twin of the person who played 6 games.

Since Alice’s twin did not play against their own twin, Alice must have played either 0 or 6 games. Since Alice asked how many games each of the other seven people had played, it is likely that Alice herself did not know how many games she had played. This suggests that Alice is the one who played 0 games and her twin is the one who played 6 games.

So Alice played 0 checkers games and her twin played 6 checkers games.
__________________
I wish I knew how to quit you
Old 28th March 2023, 03:37 AM   #546
Darat
Lackey
Administrator
 
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Is there any way of getting (legal and ethical!) access to GPT-4 directly, rather than via Bing Chat's implementation, without paying?
__________________
I wish I knew how to quit you
Old 28th March 2023, 11:43 AM   #547
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Originally Posted by Puppycow View Post
The newest iteration of GPT, GPT-4, can pass a bar exam:

https://www.iit.edu/news/gpt-4-passes-bar-exam
Considering the people who write law textbooks can also pass a bar exam, if GPT has access to those texts this should be expected and predictable.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 28th March 2023, 11:45 AM   #548
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Originally Posted by Roger Ramjets View Post
I don't believe anything an AI or any search engine tells me now, without a source that can be evaluated for accuracy. ChatGPT has negatively impacted the rate that I accept and believe any information - good or bad.

But what if they also use AI?
It won't make a difference. "AI" as a buzzword has already long been in use to describe sorting algorithms.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 28th March 2023, 12:04 PM   #549
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Originally Posted by Roger Ramjets View Post
The greatest danger is from people thinking sentience is the tipping point. Non-sentient machines are just as capable of hurting us.
I disagree with the framing, though. "The machine" (speaking of AI programs in this case) isn't going to "hurt us". It isn't merely not "the greatest danger"; it isn't a realistic danger at all.

The danger is and can only be the decisions and actions of people.

This is important - so extremely important that I cannot stress enough how important it is. It's not some semantic nitpick. If someone is run over by an AI-controlled car, that person was killed by the person or company who was operating that vehicle, not "the AI". It could (and in my opinion, should) also be argued that the company that manufactured it is also liable for selling a defective product, just like any other company would be in any other instance that doesn't involve "AI".
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 28th March 2023, 05:53 PM   #550
Fast Eddie B
Philosopher
 
 
Join Date: Sep 2010
Location: Lenoir City, TN/Mineral Bluff, GA
Posts: 7,555
Upthread I mentioned that ChatGPT had made a math error in a fairly straightforward physics problem I gave it. The error was pointed out by a fellow on another forum using a calculator. I verified his answer with WolframAlpha. I wondered why they couldn't have ChatGPT "reach out" to calculator programs and/or WolframAlpha to "check its work".

Apparently Stephen Wolfram himself had a similar thought!

https://youtu.be/z5WZhCBRDpU
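(The basic idea is easy to sketch. The toy Python below is only an illustration, not how OpenAI or Wolfram actually wire it up: it pulls "expression = value" claims out of a model's answer and re-checks them with a trusted engine, with sympy standing in for a calculator or WolframAlpha. The regex and the sample answer are made up for the example.)

Code:
import re
from sympy import sympify

def check_arithmetic(answer_text, tolerance=1e-6):
    """Find 'expression = number' claims in the text and report any that don't hold."""
    problems = []
    # Deliberately crude pattern: a run of digits/operators, then '=', then a number.
    pattern = r"([\d\.\s\+\-\*/\(\)\^]+?)=\s*(-?\d+(?:\.\d+)?)"
    for expr, claimed in re.findall(pattern, answer_text):
        expr = expr.strip().replace("^", "**")
        if not expr:
            continue
        try:
            actual = float(sympify(expr))      # the trusted engine re-does the arithmetic
        except Exception:
            continue                           # not something the engine can parse
        if abs(actual - float(claimed)) > tolerance:
            problems.append(f"{expr} = {claimed} (engine says {actual})")
    return problems

# A made-up model answer with one arithmetic slip in the second step:
answer = "The kinetic energy is 0.5 * 1200 * 25**2 = 375000 J, which is 375000 / 1000 = 37.5 kJ."
print(check_arithmetic(answer))   # flags the second step: 375000 / 1000 is 375, not 37.5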
Old 28th March 2023, 07:20 PM   #551
Checkmite
Skepticifimisticalationist
 
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
Here's another example of how assumptions and mistaken impressions lead to people just enthusiastically vomiting up misinformation about AI. Senator Chris Murphy (D-CT) tweeted this:

Quote:
ChatGPT taught itself to do advanced chemistry. It wasn't built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked.

Something is coming. We aren't ready.
It's utterly false. Nobody who is a chemist seems to agree with the assessment that ChatGPT "knows chemistry"; indeed, several reply tweets assert quite the opposite. ChatGPT just sounds like it knows advanced chemistry to people who don't, because ChatGPT knows what chemists "sound like" in text and is able to replicate that.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Old 28th March 2023, 07:56 PM   #552
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
If you need an example of how good ChatGPT ISN'T at even basic chemistry, we have:

What would have to be done to an atom of oxygen for it to become an atom of silicon?

Quote:
To convert an atom of oxygen (O) into an atom of silicon (Si), oxygen would need to gain four additional electrons and lose two electrons.

This process is called "reduction" and is typically achieved by reacting oxygen with a reducing agent that can donate four electrons. One such reducing agent is metallic silicon itself, which can react with oxygen to form silicon dioxide (SiO2).

In the reaction, two atoms of oxygen combine with one atom of silicon to form SiO2:

2 Si + O2 → 2 SiO2

Each oxygen atom gains two electrons from the silicon atoms, giving it a total of eight electrons in its outermost shell, which is the same as the electron configuration of a silicon atom. The silicon atoms, in turn, lose four electrons to become silicon ions with a +4 charge, which have the same electron configuration as a neutral silicon atom.

Therefore, the process of converting an atom of oxygen into an atom of silicon involves a chemical reaction rather than a nuclear reaction.
This is...bad.
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Checkmite is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2023, 07:57 PM   #553
bruto
Penultimate Amazing
 
bruto's Avatar
 
Join Date: Jun 2005
Location: Way way north of Diddy Wah Diddy
Posts: 34,758
I did hear an interesting discussion with Stephen Wolfram, who has apparently enabled ChatGPT to utilize Wolfram Alpha and whatnot to do the mathematics it isn't set up for.

https://www.youtube.com/watch?v=z5WZhCBRDpU
__________________
Like many humorless and indignant people, he is hard on everybody but himself, and does not perceive it when he fails his own ideal (Molière)

A pedant is a man who studies a vacuum through instruments that allow him to draw cross-sections of the details (John Ciardi)
bruto is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2023, 09:52 PM   #554
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,469
Originally Posted by Checkmite View Post
This is...bad.
They appear to have retrained it on this one, as even legacy 3.5 mode now gives:
Quote:
For an oxygen atom to become a silicon atom, it would have to undergo a nuclear reaction in which its atomic nucleus is transformed. This process is called nuclear transmutation and can be achieved through a variety of means, such as bombardment by high-energy particles, exposure to high temperatures and pressures, or the fusion of lighter elements.

In particular, to transmute an oxygen nucleus into a silicon nucleus, a neutron would need to be added to the oxygen nucleus, which would then undergo beta decay and become a silicon nucleus. This process typically requires high energy and specialized equipment, such as a particle accelerator or a nuclear reactor.

It's important to note that transmutation of elements is not a common occurrence in everyday life, and should be approached with caution due to the potential dangers and radioactivity involved.
I'm thinking OpenAI has people watching this thread...
Pulvinar is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 28th March 2023, 11:00 PM   #555
Checkmite
Skepticifimisticalationist
 
Checkmite's Avatar
 
Join Date: Jun 2002
Location: Gulf Coast
Posts: 28,228
That's certainly better than "the very first sentence is already wrong" levels of wrong, but also still just wrong. Adding a neutron to an oxygen nucleus would NOT transmute it into a silicon nucleus. If you add a single neutron to an oxygen nucleus and that neutron beta-decays into a proton, you've increased the oxygen atom's atomic number (8, originally) by one - i.e., you'll get fluorine.
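
If you want the bookkeeping spelled out, here it is as a throwaway script (element names hard-coded; O-17 is actually stable, but this is what the hypothetical decay would give you):

Code:
# Nucleon bookkeeping for "add one neutron to O-16, then let that neutron beta-decay".
# Z = protons (this is what defines the element), N = neutrons, mass number A = Z + N.
ELEMENTS = {8: "oxygen", 9: "fluorine", 14: "silicon"}

def neutron_capture(Z, N):
    return Z, N + 1              # same element, mass number up by one

def beta_minus_decay(Z, N):
    return Z + 1, N - 1          # a neutron becomes a proton: new element, same mass number

Z, N = 8, 8                      # oxygen-16
Z, N = neutron_capture(Z, N)     # oxygen-17
Z, N = beta_minus_decay(Z, N)    # fluorine-17
print(ELEMENTS[Z], Z + N)        # fluorine 17 -- nowhere near silicon (Z = 14)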

I don't think it's an improvement they've made over the last couple of hours. I think it's a case of: because the machine doesn't actually "know" enough chemistry to decide which of the multiple responses it's generated is more likely to be correct than the others, it just randomly chooses one to output, and that choice can differ from user to user.
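
Whether or not it literally ranks whole candidate answers, the relevant point is that generation is stochastic: the model samples each token from a probability distribution, and at a nonzero "temperature" the same prompt can come out differently on different runs. A toy illustration with made-up numbers (this is not OpenAI's actual code or internals):

Code:
import random

# Made-up next-token probabilities for "adding a neutron to oxygen-16 gives ___"
candidates = {"fluorine": 0.50, "oxygen-17": 0.35, "silicon": 0.15}

def sample_with_temperature(probs, temperature=1.0, rng=random):
    """Sharpen (T < 1) or flatten (T > 1) the distribution, then draw one sample."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            break
    return tok

# Five "different users" asking the same question can get different answers,
# even though nothing about the model changed in between:
print([sample_with_temperature(candidates) for _ in range(5)])
# e.g. ['fluorine', 'oxygen-17', 'fluorine', 'silicon', 'fluorine']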
__________________
"¿WHAT KIND OF BIRD?
¿A PARANORMAL BIRD?"
--- Carlos S., 2002
Checkmite is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 29th March 2023, 12:06 AM   #556
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,469
GPT-4 gives this:
Quote:
To transform an atom of oxygen into an atom of silicon, you would need to change the number of protons in the nucleus. Oxygen has 8 protons, while silicon has 14. So, you would need to add 6 protons to the oxygen atom's nucleus to turn it into silicon.

However, doing this is not feasible by any practical means currently available to us. Nuclear reactions, such as those occurring in stars or nuclear reactors, can change the atomic composition of elements, but these processes are extremely complex and involve the interaction of many particles under high energy and pressure conditions.

Additionally, you would also need to account for the changes in the number of neutrons and electrons. Silicon has 14 electrons and typically 14 neutrons in its most abundant isotope (silicon-28), whereas oxygen has 8 electrons and 8 neutrons in its most abundant isotope (oxygen-16). To make an oxygen atom into a silicon atom, you would need to add 6 neutrons and 6 electrons as well.

In summary, while it is theoretically possible to change an atom of oxygen into an atom of silicon by altering its subatomic particles, such a transformation is beyond our current technological capabilities and understanding.
Pulvinar is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 29th March 2023, 12:12 AM   #557
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,469
Giving it a chance for a rebuttal:

Quote:
How would you respond to this post by a reader? "I don't think it's an improvement they've made over the last couple of hours. I think it's a case of, because the machine doesn't actually "know" enough chemistry to decide how likely any of the multiple responses its generated is to be more correct than the others, it just randomly chooses one to output, and that can be different for different users."

Thank you for sharing your thoughts on the matter. It is important to clarify that, as an AI language model, my understanding of chemistry or any other subject is based on the vast amount of text data that I have been trained on. While I strive to provide accurate and relevant information, there might be instances where the generated response may not be as precise or accurate as desired.

It's true that the model can sometimes generate multiple plausible responses, and the selection of a specific response depends on various factors, including how the input question is framed and the underlying algorithms used to pick the most suitable answer. While I don't "randomly" choose answers, there may be some variation in the responses generated for different users or even the same user asking a similar question at different times.


As an AI language model, I am constantly evolving and improving based on feedback from users like you. Your comments are valuable in helping me refine my understanding and improve the quality of my responses. If you have any concerns or suggestions for improvement, please feel free to share them.
Pulvinar is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 29th March 2023, 02:17 AM   #558
Darat
Lackey
Administrator
 
Darat's Avatar
 
Join Date: Aug 2001
Location: South East, UK
Posts: 109,555
Originally Posted by Checkmite View Post
Here's another example of how assumptions and mistaken impressions lead to people just enthusiastically vomiting up misinformation about AI. Senator Chris Murphy (D-CT) tweeted this:



It's utterly false. Nobody who is a chemist seems to agree with the assessment that ChatGPT "knows chemistry"; indeed, several reply tweets assert quite the opposite. ChatGPT just sounds like it knows advanced chemistry to people who don't, because ChatGPT knows what chemists "sound like" in text and is able to replicate that.
For me this is one of the major points of concern with the current chatbots: as humans we are trained to listen to people who speak with "authority", and most of us carry the heuristic that "well spoken" means expert. Which I think means we will be less sceptical about their results than we should be.
__________________
I wish I knew how to quit you
Darat is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 29th March 2023, 02:05 PM   #559
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,469
Originally Posted by Darat View Post
For me this is one of the major points of concern with the current chatbots: as humans we are trained to listen to people who speak with "authority", and most of us carry the heuristic that "well spoken" means expert. Which I think means we will be less sceptical about their results than we should be.
Skepticism is always wise, and should be applied just as much to what authoritative, well-spoken humans say. I often find myself arguing with it when it takes the popular human side of an issue, because that's what the majority of its training data says, reason be damned.
Pulvinar is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top
Old 29th March 2023, 08:35 PM   #560
rjh01
Gentleman of leisure
Tagger
 
rjh01's Avatar
 
Join Date: May 2005
Location: Flying around in the sky
Posts: 27,723
How was silicon made?

Originally Posted by chatGPT
Silicon is believed to have been formed through nuclear fusion processes in the cores of massive stars. During the later stages of a star's life, when it has exhausted its hydrogen and helium fuel, it begins to fuse heavier elements such as carbon, oxygen, and neon.

As the fusion process continues, heavier and heavier elements are formed, ultimately leading to the production of silicon. Specifically, silicon is formed through a process known as alpha process nucleosynthesis, in which helium nuclei (alpha particles) are combined with lighter elements such as carbon, nitrogen, and oxygen to form heavier elements, including silicon.

Once formed, silicon can be released into space through stellar winds or during supernova explosions. Over time, this silicon-rich material can become incorporated into new stars and planets, including our own Solar System. In fact, silicon is the second most abundant element in the Earth's crust (after oxygen), and it is a key component of many minerals and rocks, including quartz, feldspar, and mica.
__________________
This signature is for rent.
rjh01 is offline   Quote this post in a PM   Nominate this post for this month's language award Copy a direct link to this post Reply With Quote Back to Top