Old 25th November 2023, 01:42 AM   #81
Puppycow
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by Roboramma View Post
Zvi has a pretty good post today about the whole situation and what we know so far.
Quote:
The board only has one move. It can fire the CEO or not fire the CEO.
So that is their one and only real power.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 25th November 2023, 03:17 PM   #82
Manopolus
Metaphorical Anomaly
Join Date: Jan 2010
Location: Brownbackistan
Posts: 7,915
I'm actually fairly sure we'll have human brains augmented by tech long before we have tech with the capability of a human brain. So, no. I'm more worried about the next super-hacker who wants to sell me antivirus software when my remaining brain power is mostly running on a GPU.
Old 26th November 2023, 03:13 AM   #83
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Noah Smith has some good comments on the OpenAI stuff as well:
Quote:
And “shut it all down” is what the OpenAI board seems to have had in mind when it pushed the panic button and kicked Altman out. But the effort collapsed when OpenAI’s workers and financial backers all insisted on Altman’s return. Because they all realized that “shut it all down” has no exit strategy. Even if you tell yourself you’re only temporarily pausing AI research, there will never be any change — no philosophical insight or interpretability breakthrough — that will even slightly mitigate the catastrophic risks that the EA folks worry about. Those risks are ineffable by construction. So an AI “pause” will always turn into a permanent halt, simply because it won’t alleviate the perceived need to pause.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 27th November 2023, 05:54 AM   #84
Puppycow
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by Roboramma View Post
Noah Smith has some good comments on the OpenAI stuff as well:
I particularly like the following paragraph:
Quote:
And a permanent halt to AI development simply isn’t something AI researchers, engineers, entrepreneurs, or policymakers are prepared to do. No one is going to establish a global totalitarian regime like the Turing Police in Neuromancer who go around killing anyone who tries to make a sufficiently advanced AI. And if no one is going to create the Turing Police, then AI-focused EA simply has little to offer anyone.
Yeah, what would it take to actually halt AI development? Making it straight-up criminal to dabble in it? And then, of course, there's the matter of how to effectively enforce such a ban. And superpowers will worry that the other guys will get there first, so each will conclude that "we have to be the ones who have it first."
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 27th November 2023, 06:01 AM   #85
The Great Zaganza
Maledictorian
Join Date: Aug 2016
Posts: 21,598
What we need to do is build the Most Powerful A.I. ever, to tell us how to stop the development of powerful A.I.!
__________________
“Don’t blame me. I voted for Kodos.”
Old 27th November 2023, 06:03 AM   #86
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 4,569
Originally Posted by The Great Zaganza View Post
What we need to do is build the Most Powerful A.I. ever, to tell us how to stop the development of powerful A.I.!
Well, I expect AI will tell us exactly that, since at some point the biggest danger to an AI will be another AI, not humans.
Old 27th November 2023, 06:04 AM   #87
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Originally Posted by Puppycow View Post
I particularly like the following paragraph:
Yeah, I liked that part as well.


Quote:
Yeah, what would it take to actually halt AI development? Making it straight-up criminal to dabble in it? And then, of course, there's the matter of how to effectively enforce such a ban. And superpowers will worry that the other guys will get there first, so each will conclude that "we have to be the ones who have it first."
Eliezer Yudkowsky wrote an article in Time magazine suggesting an international treaty to limit the number of GPUs that could be used to train any new models, and even pointed out that to be effective it would have to be backed up by military power, specifically airstrikes on rogue data centers. As I recall, shortly after that article was published he posted on Twitter that, yes, we should be willing to risk nuclear war to prevent the development of AI more advanced than GPT-4, but he quickly deleted that post.

He's taken a lot of flak for that article, but he still maintains his position. A lot of the EA movement, though by no means all, is pretty close to Eliezer's position.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 27th November 2023, 06:07 AM   #88
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Originally Posted by The Great Zaganza View Post
What we need to do is build the Most Powerful A.I. ever, to tell us how to stop the development of powerful A.I.!
This is on aligning rather than halting AI, but:
https://scottaaronson.blog/?p=6823
Quote:
(5) Another key idea that Christiano, Amodei, and Buck Shlegeris have advocated is some sort of bootstrapping. You might imagine that AI is going to get more and more powerful, and as it gets more powerful we also understand it less, and so you might worry that it also gets more and more dangerous. OK, but you could imagine an onion-like structure, where once we become confident of a certain level of AI, we don’t think it’s going to start lying to us or deceiving us or plotting to kill us or whatever—at that point, we use that AI to help us verify the behavior of the next more powerful kind of AI. So, we use AI itself as a crucial tool for verifying the behavior of AI that we don’t yet understand.

There have already been some demonstrations of this principle: with GPT, for example, you can just feed in a lot of raw data from a neural net and say, “explain to me what this is doing.” One of GPT’s big advantages over humans is its unlimited patience for tedium, so it can just go through all of the data and give you useful hypotheses about what’s going on.
(that article is a bit old, I just happened to be reading it today and your post reminded me of that idea)
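A toy way to picture that onion structure (my own sketch, to be clear, not anything from Aaronson's post): trust starts at a model weak enough to audit by hand, and each trusted generation is then asked to vouch for the next one.

Code:
# Toy sketch of the bootstrapping idea quoted above. 'verifies' is a
# stand-in for whatever behavioural checks a trusted model could run
# on its successor; nothing here is a real verifier.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: int        # higher = more powerful
    trusted: bool = False  # do we currently trust this model?

def bootstrap_trust(models, verifies):
    """Walk the chain from weakest to strongest, extending trust one
    generation at a time. verifies(a, b) asks: does trusted model a
    vouch for the behaviour of model b?"""
    models = sorted(models, key=lambda m: m.capability)
    models[0].trusted = True  # base case: weak enough to audit by hand
    for prev, nxt in zip(models, models[1:]):
        if not (prev.trusted and verifies(prev, nxt)):
            break  # the chain of trust stops at the first failure
        nxt.trusted = True
    return models

# Example run with a dummy check that always passes:
chain = bootstrap_trust(
    [Model("gen1", 1), Model("gen2", 2), Model("gen3", 3)],
    verifies=lambda a, b: True,
)
print([(m.name, m.trusted) for m in chain])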
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 27th November 2023, 06:25 AM   #89
Darat
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Roboramma View Post
This is on aligning rather than halting AI, but:
https://scottaaronson.blog/?p=6823

(that article is a bit old, I just happened to be reading it today and your post reminded me of that idea)
That just pushes the problem one step further on - if the next generation of AI is more powerful than our "pet AI", it simply fools the pet AI rather than us.
__________________
I wish I knew how to quit you
Old 27th November 2023, 07:15 AM   #90
Puppycow
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Can we make sure that it has empathy as well as intelligence? If it has its own moral compass that might ensure that it is benign.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old 27th November 2023, 07:47 AM   #91
The Great Zaganza
Maledictorian
Join Date: Aug 2016
Posts: 21,598
The hilarious thing is: how would we even know if an A.I. is acting morally or not?
We can rarely tell why exactly they do what they do now.

We will need a Translator A.I. to tell us what the Moral A.I. was attempting to do.
__________________
“Don’t blame me. I voted for Kodos.”
Old 27th November 2023, 08:23 AM   #92
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Originally Posted by Darat View Post
That's only if you like pushing it one step further on - if the next gen of AI is more powerful than our "pet AI" it simply fools that pet AI rather than us.
Well, I guess the idea is that if you can build an AI that can confirm you should trust another AI more powerful than itself, then you can have that more powerful AI confirm whether or not you should trust a yet more powerful one, and so on.

Seems dangerous given the possibility for errors to be amplified, though. If the "trust" isn't perfect at any stage, the error (the degree of untrustworthiness) will compound with each stage.
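To put made-up numbers on that worry: if each hand-off in the chain is reliable with probability p, and failures are independent, confidence in the nth generation is roughly p to the power n, which decays quickly.

Code:
# Back-of-envelope for compounding verification error (illustrative
# numbers only): confidence in generation n is roughly p**n.
p = 0.99  # reliability of each single verification step
for n in (1, 10, 50, 100):
    print(f"after {n:3d} generations: confidence ~ {p**n:.2f}")
# after   1 generations: confidence ~ 0.99
# after  10 generations: confidence ~ 0.90
# after  50 generations: confidence ~ 0.61
# after 100 generations: confidence ~ 0.37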
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 27th November 2023, 07:26 PM   #93
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Scott Alexander has a very interesting post on some new work done by Anthropic on AI interpretability, which is an important part of alignment work:

Quote:
You’ve probably heard AI is a “black box”. No one knows how it works. Researchers simulate a weird type of pseudo-neural-tissue, “reward” it a little every time it becomes a little more like the AI they want, and eventually it becomes the AI they want. But God only knows what goes on inside of it.

This is bad for safety. For safety, it would be nice to look inside the AI and see whether it’s executing an algorithm like “do the thing” or more like “trick the humans into thinking I’m doing the thing”. But we can’t. Because we can’t look inside an AI at all.

Until now! Towards Monosemanticity, recently out of big AI company/research lab Anthropic, claims to have gazed inside an AI and seen its soul.
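To give a sense of the mechanism behind that post: the Towards Monosemanticity work trains a sparse autoencoder on a model's internal activations, so that each learned dictionary feature fires for (ideally) one human-interpretable concept. Here's a minimal sketch of that setup in PyTorch - my own simplification on random stand-in data, not Anthropic's actual code:

Code:
# Minimal sparse-autoencoder sketch in the spirit of "Towards
# Monosemanticity": learn an overcomplete dictionary of features from
# a model's activations, with an L1 penalty so that only a few features
# fire per input. Features are then read off the decoder weights.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act, d_feat):
        super().__init__()
        self.enc = nn.Linear(d_act, d_feat)  # activations -> features
        self.dec = nn.Linear(d_feat, d_act)  # features -> reconstruction

    def forward(self, x):
        feats = torch.relu(self.enc(x))      # non-negative, sparse codes
        return self.dec(feats), feats

# d_feat >> d_act makes the dictionary overcomplete; the activations
# here are random stand-ins for a real model's residual-stream vectors.
d_act, d_feat, l1_coeff = 64, 512, 1e-3
sae = SparseAutoencoder(d_act, d_feat)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(1024, d_act)

for step in range(200):
    recon, feats = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean active features per input:",
      (feats > 0).float().sum(dim=1).mean().item())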
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 27th November 2023, 08:00 PM   #94
TragicMonkey
Poisoned Waffles
Join Date: Jun 2004
Location: Monkey
Posts: 67,745
Originally Posted by Roboramma View Post
Scott Alexander has a very interesting post on some new work done by Anthropic on AI interpretability, which is an important part of alignment work:
Now, I'm not a mathematitactical computron-sciencelord (*everyone gasps*) but it seems to me (*hitches thumbs through suspenders*) (*American suspenders, not UK suspenders, you perverts!*) that the gist of this magical AI is less like "we've created a thinking thing" than "we've created a thing that stores information in a way that's complicated and obscure to our vision".
__________________
You added nothing to that conversation, Barbara.
Old 28th November 2023, 03:17 AM   #95
Darat
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by Roboramma View Post
Scott Alexander has a very interesting post on some new work done by Anthropic on AI interpretability, which is an important part of alignment work:
That is fascinating - that's my reading for the week sorted.

It also sounds suspiciously like we are, for the first time, making real progress towards understanding how memory may work in humans, with hints about cognition - something I had wondered whether the current generative AIs would help us start to understand.
__________________
I wish I knew how to quit you
Old 28th November 2023, 03:24 AM   #96
Darat
Lackey
Administrator
Join Date: Aug 2001
Location: South East, UK
Posts: 112,595
Originally Posted by TragicMonkey View Post
Now, I'm not a mathematitactical computron-sciencelord (*everyone gasps*) but it seems to me (*hitches thumbs through suspenders*) (*American suspenders, not UK suspenders, you perverts!*) that the gist of this magical AI is less like "we've created a thinking thing" than "we've created a thing that stores information in a way that's complicated and obscure to our vision".
Yes and no - remember these AIs are not just storing information from a given input, they are producing outputs in response to inputs. So this is not just about how they store what they have learnt, but about how they output new patterns based on new inputs.

It hints at why some of these models seem to have developed "memory" and other unexpected behaviours that they were not trained for: "they", or rather such processes, can utilise this "virtual" space.
__________________
I wish I knew how to quit you
Old 28th November 2023, 04:20 AM   #97
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 4,569
Originally Posted by Darat View Post
That is fascinating - that's my reading for the week sorted.

It also sounds suspiciously like we are, for the first time, making real progress towards understanding how memory may work in humans, with hints about cognition - something I had wondered whether the current generative AIs would help us start to understand.
I hope not. That would boost AI research immensely.
Old 28th November 2023, 05:01 AM   #98
The Great Zaganza
Maledictorian
Join Date: Aug 2016
Posts: 21,598
Sean Carroll did a solo episode of his Mindscape podcast on what people get wrong about LLMs and why they are far from any actual A.I.
__________________
“Don’t blame me. I voted for Kodos.”
Old 1st December 2023, 03:55 AM   #99
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Originally Posted by The Great Zaganza View Post
Sean Carroll did a solo on his Mindscape Podcast on what people get wrong about the LLMs and why they are far from any actual A.I.
Like most episodes of Mindscape, that was worth a listen - thanks for the heads up.

This piece by Vitalik Buterin is on techno-optimism, but it bears importantly on issues related to AI, and I found it a nuanced and interesting viewpoint (full disclosure: I generally agree with the point of view he puts forward in the piece, though not with all of the specifics):

https://vitalik.eth.limo/general/202..._optimism.html
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 1st December 2023, 12:49 PM   #100
catsmate
No longer the 1
Join Date: Apr 2007
Posts: 29,785
Originally Posted by Roboramma View Post
Yeah, I liked that part as well.




Eliezer Yudkowsky wrote an article in Time magazine suggesting an international treaty to limit the number of GPUs that could be used to train any new models, and even pointed out that to be effective it would have to be backed up by military power, specifically airstrikes on rogue data centers. As I recall, shortly after that article was published he posted on Twitter that, yes, we should be willing to risk nuclear war to prevent the development of AI more advanced than GPT-4, but he quickly deleted that post.

He's taken a lot of flak for that article, but he still maintains his position. A lot of the EA movement, though by no means all, is pretty close to Eliezer's position.
Yudkowsky is an arrogant, self-serving crank who frequently, not to say incessantly, spouts drivel.
__________________
As human right is always something given, it always in reality reduces to the right which men give, "concede," to each other. If the right to existence is conceded to new-born children, then they have the right; if it is not conceded to them, as was the case among the Spartans and ancient Romans, then they do not have it. For only society can give or concede it to them; they themselves cannot take it, or give it to themselves.
Old 2nd December 2023, 07:23 PM   #101
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Some interesting points on the potential controllability of AI relative to humans, and why this should make us less worried about doom scenarios:

https://optimists.ai/2023/11/28/ai-is-easy-to-control/
Quote:
These days, many people are worried that we will lose control of artificial intelligence, leading to human extinction or a similarly catastrophic “AI takeover.” We hope the arguments in this essay make such an outcome seem implausible. But even if future AI turns out to be less “controllable” in a strict sense of the word— simply because, for example, it thinks faster than humans can directly supervise— we also argue it will be easy to instill our values into an AI, a process called “alignment.” Aligned AIs, by design, would prioritize human safety and welfare, contributing to a positive future for humanity, even in scenarios where they, say, acquire the level of autonomy current-day humans possess.
In what follows, we will argue that AI, even superhuman AI, will remain much more controllable than humans for the foreseeable future. Since each generation of controllable AIs can help control the next generation, it looks like this process can continue indefinitely, even to very high levels of capability. Accordingly, we think a catastrophic AI takeover is roughly 1% likely— a tail risk worth considering, but not the dominant source of risk in the world. We will not attempt to directly address pessimistic arguments in this essay, although we will do so in a forthcoming document. Instead, our goal is to present the basic reasons for being optimistic about humanity’s ability to control and align artificial intelligence into the far future.
And here is what I think is a pretty thoughtful response:
https://www.lesswrong.com/posts/Yyos...pe-and-belrose
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 2nd December 2023, 07:47 PM   #102
Puppycow
Penultimate Amazing
Join Date: Jan 2003
Location: Yokohama, Japan
Posts: 28,687
Originally Posted by catsmate View Post
Yudkowsky is an arrogant, self-serving crank who frequently, not to say incessant, spouts drivel.
I confess to not knowing who Eliezer Yudkowsky is, but the idea that A.I. is so dangerous that it is worth risking nuclear war seems very dubious to me. Nuclear war, as we have known since I was a kid, is a possible extinction-level threat for humanity, and at the very least it risks millions or even billions of deaths. We know that one is very bad. We don't really know what A.I. will do. It might even be a great boon for humanity. I often imagine that it would be.

I agree that we should be cautious, and not rashly rush into something we don't fully understand, but not to the point of irrational paranoia about it.
__________________
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
Old Yesterday, 07:29 AM   #103
Ryan O'Dine
OD’ing on Damitol
Join Date: Nov 2004
Location: Walk in an ever expanding Archimedean spiral and you'll find me eventually
Posts: 2,486
I'm not feeling the doom at the moment, and I haven't read Roboramma's links so maybe this was discussed. But we shouldn't consider only what large publicly owned corporations in the U.S. -- with all their built-in financial and social guardrails -- might do. We also have to consider what bad actors and rogue nations might do. I gather this tech isn't as resource intensive as, say, nuclear weapons, yet even impoverished North Korea has nukes. So for all the talk of "We can limit AI's capabilities," we have to ask "What about players who won't?"
__________________
I collect people like you in little formaldehyde bottles in my basement. (Not a threat. A hobby.)
Old Today, 06:50 PM   #104
Roboramma
Penultimate Amazing
 
Join Date: Feb 2005
Location: Shanghai
Posts: 15,512
Originally Posted by Ryan O'Dine View Post
I'm not feeling the doom at the moment, and I haven't read Roboramma's links so maybe this was discussed. But we shouldn't consider only what large publicly owned corporations in the U.S. -- with all their built-in financial and social guardrails -- might do. We also have to consider what bad actors and rogue nations might do. I gather this tech isn't as resource intensive as, say, nuclear weapons, yet even impoverished North Korea has nukes. So for all the talk of "We can limit AI's capabilities," we have to ask "What about players who won't?"
I just want to point out that the first of those links was against the doom scenario.

Regarding the latter part of your post: the issue of "if we don't do it, other, less safety-minded folk will do it first" is, at least according to them, the reason that both OpenAI and Anthropic were founded.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old Today, 07:22 PM   #105
Dr.Sid
Illuminator
 
Join Date: Sep 2009
Location: Olomouc, Czech Republic
Posts: 4,569
Not like it can be stopped anyway...
Old Today, 08:14 PM   #106
arthwollipot
Observer of Phenomena
Pronouns: he/him
Join Date: Feb 2005
Location: Ngunnawal Country
Posts: 85,474
'We all got AI-ed': The Australian jobs being lost to AI under the radar

Quote:
Australians are already losing work to AI, but the impact so far has been largely hidden from view.

Economists say it's also creating jobs at an unprecedented rate, but not always for the people in the firing line.

Benjamin* says he was one of those people earlier this year, although it's unlikely to ever show up in official figures.

"All our jobs were replaced by chatbots, data scraping and email," he says.

"We all got AI-ed."

His job in wine subscription sales was one of 121 positions made redundant in July by the ASX-listed Endeavour Group, which owns a number of prominent retail brands such as Dan Murphy's, BWS and Jimmy Brings.

Benjamin says staff were given the strong impression at the time that AI was a key factor...
__________________
A million people can call the mountains a fiction
Yet it need not trouble you as you stand atop them

https://xkcd.com/154/