Old 29th December 2011, 11:29 AM   #1
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
How does radio work?

Hi,

I am wondering how radio works. I have read that radio is an electromagnetic wave and that electromagnetic waves behave a bit like mechanical waves, like the ones you see in water, but that they are also quite different.

I would like to know how an electromagnetic wave that is broadcast by a radio station can "contain" a song. I know that a radio wave has an amplitude and a frequency, but isn't a song a mixture of a lot of tones with different frequencies? So how is a song packaged in a radio wave? Or do they use multiple waves to carry a song?
Old 29th December 2011, 11:37 AM   #2
Crossbow
Seeking Honesty and Sanity
 
 
Join Date: Oct 2001
Location: Charleston, WV
Posts: 13,034
Originally Posted by julius View Post
Hi,

I am wondering how radio works. I have read that radio is an electromagnetic wave and that electromagnetic waves behave a bit like mechanical waves, like the ones you see in water, but that they are also quite different.

I would like to know how an electromagnetic wave that is broadcast by a radio station can "contain" a song. I know that a radio wave has an amplitude and a frequency, but isn't a song a mixture of a lot of tones with different frequencies? So how is a song packaged in a radio wave? Or do they use multiple waves to carry a song?
Wow!

It is difficult to explain just how electromagnetic waves work without covering a few other things first.

Also, there are two different methods of commercial radio: FM and AM. Again, it is difficult to explain these terms without first having a good grasp of electromagnetic waves and some basic electronics.

However, I may be able to point you to some books and such which discuss these topics. Would that help?
__________________
On 22 JUL 2016, Candidate Donald Trump in his acceptance speech: "There can be no prosperity without law and order."
On 05 FEB 2019, President Donald Trump said in his State of the Union Address: "If there is going to be peace and legislation, there cannot be war and investigation."
On 15 FEB 2019 'BobTheCoward' said: "I constantly assert I am a fool."
A man's best friend is his dogma.
Old 29th December 2011, 11:49 AM   #3
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary data stream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Oh, and book suggestions are always welcome of course.

Last edited by julius; 29th December 2011 at 11:50 AM.
Old 29th December 2011, 11:51 AM   #4
BowlOfRed
Graduate Poster
 
 
Join Date: Jul 2010
Location: Silicon Valley
Posts: 1,733
Originally Posted by julius View Post
I would like to know how an electromagnetic wave that is broadcast by a radio station can "contain" a song. I know that a radio wave has an amplitude and a frequency, but isn't a song a mixture of a lot of tones with different frequencies? So how is a song packaged in a radio wave? Or do they use multiple waves to carry a song?
The "carrier" wave is at a very high frequency (compared to the waves in the audio portion of the song). And since it's a wave, it's very repetitive. You can predict what the wave "should" look like.

The carrier wave is then deformed in a way corresponding to the signal. So the receiver can see the deformation and turn that back into the information from the song.

This wikipedia image shows an information signal (top) that is used to deform two carrier waves. The first one is deformed in amplitude and the second in frequency.

http://en.wikipedia.org/wiki/File:Amfm3-en-de.gif
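To make the picture concrete, here is a minimal Python sketch (assuming NumPy is available; all the numbers are made up for illustration) of one message tone deforming a carrier in amplitude (AM) and in frequency (FM), just like in that image:

Code:
import numpy as np

fs = 100_000                    # simulation sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of time

fc = 10_000                     # carrier frequency (Hz), unrealistically low for clarity
fa = 500                        # one audio tone (Hz) standing in for the song
audio = np.sin(2 * np.pi * fa * t)   # the information signal

# AM: the audio deforms the carrier's amplitude
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * fc * t)

# FM: the audio deforms the carrier's frequency, implemented via the phase
# (the phase is the running integral of the instantaneous frequency)
deviation = 2_000               # maximum frequency swing (Hz)
phase = 2 * np.pi * (fc * t + deviation * np.cumsum(audio) / fs)
fm = np.cos(phase)

Plot am and fm against t and you get essentially the two lower traces of the linked animation.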
Old 29th December 2011, 11:55 AM   #5
BowlOfRed
Graduate Poster
 
 
Join Date: Jul 2010
Location: Silicon Valley
Posts: 1,733
Originally Posted by julius View Post
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.
Traditionally, the encoding is not binary (or digital). If your reference signal is low, you deform the carrier wave a little. If your reference signal is high, you deform the carrier wave a lot. So this is all analog.

Quote:
But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary data stream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?
Unless you're on HD radio or satellite, the song is not digitally encoded. Terrestrial AM/FM are analog signals.
Old 29th December 2011, 12:18 PM   #6
Skeptical Greg
Agave Wine Connoisseur
 
 
Join Date: Jul 2002
Location: Just past ' Resume Speed ' .
Posts: 16,257
Here is a Wiki article on AM with a little more detail...

http://en.wikipedia.org/wiki/Amplitude_modulation
__________________
" The main problem I have with the idea of heaven, is the thought of
spending eternity with most of the people who claim to be going there. "
Old 29th December 2011, 12:25 PM   #7
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
Originally Posted by BowlOfRed View Post
The "carrier" wave is at a very high frequency (compared to the waves in the audio portion of the song). And since it's a wave, it's very repetitive. You can predict what the wave "should" look like.

The carrier wave is then deformed in a way corresponding to the signal. So the receiver can see the deformation and turn that back into the information from the song.

This wikipedia image shows an information signal (top) that is used to deform two carrier waves. The first one is deformed in amplitude and the second in frequency.
Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz

These changes can hold information. This sequence of _electromagnetic_ wave frequencies might describe the changing _sound_ frequency of a single tone over time that is broadcast by some sender. But what if we wanted to send an entire song, which is, at each moment it plays, a combination of a lot of tones or frequencies?

The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

Wow, I hope my English is good enough to really convey what I mean. I could be all wrong in the basic understanding of how this works. I am a programmer and I tend to look at things in terms of collections, bits, etc.

Last edited by julius; 29th December 2011 at 12:27 PM.
Old 29th December 2011, 12:27 PM   #8
sol invictus
Philosopher
 
 
Join Date: Oct 2007
Posts: 8,613
Originally Posted by julius View Post
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary data stream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Oh, and book suggestions are always welcome of course.
Actually, at least for AM radio it's really not complicated at all. All sounds (including a song in mono) correspond to a single waveform - the position of a speaker cone as a function of time as it plays, for example, or the voltage at (one channel of) the output of a CD player. So all the radio transmission needs to do is transmit that waveform. That waveform contains lots of frequencies, but forget that - just think of it as a single waveform as a function of time.

AM radio transmits that in a very simple manner, best illustrated by a picture: here for instance. The high frequency of the carrier wave (that's the rapidly oscillating wave in the lower image) ends up being irrelevant. All you hear is the "envelope" - the waveform of the song being transmitted. If you fed that AM signal into a stereo, or even into the metal braces you might have in your teeth, you'd hear the song, because your ears can't hear the megahertz carrier frequency, but they can hear the audio-frequency envelope.

FM is more complex, because it uses frequency modulation and it's stereo.
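A rough Python sketch of that envelope idea (NumPy assumed, all values illustrative; a real receiver does this with a diode and a capacitor rather than code):

Code:
import numpy as np

fs = 100_000                                             # simulation rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
audio = np.sin(2 * np.pi * 300 * t)                      # the "song": one 300 Hz tone
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * 10_000 * t)  # AM on a 10 kHz carrier

# Envelope detection: rectify, then average away the fast carrier wiggles
rectified = np.abs(am)
win = fs // 10_000                                       # about one carrier cycle
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
audio_out = envelope - envelope.mean()                   # drop the DC offset

# audio_out now tracks the original 300 Hz tone (up to a scale factor):
# the carrier is gone, only the envelope - the song - remains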

Last edited by sol invictus; 29th December 2011 at 12:29 PM.
Old 29th December 2011, 12:34 PM   #9
BowlOfRed
Graduate Poster
 
 
Join Date: Jul 2010
Location: Silicon Valley
Posts: 1,733
Originally Posted by julius View Post
Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz
No, that's not a good way of thinking of it.

A microphone is just a little bladder reading the pressure of the air. As a tone impacts it, it moves in and out rhythmically. If it's not a pure tone, then it moves in and out in a more complex manner. But it's just moving in and out, and you can track how far in or out it is at any point in time.

Now you take that motion and you can do something with it. On a record, you can shove a needle to move the sides of a groove up and down. On a carrier wave, you can shove the frequency around.

Yes, you can represent a waveform as the addition of various pure frequencies. But that's not necessary to simply capture the image of the waveform and transmit it.

Quote:
The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?
I'm suggesting you don't look at it that way. :-) Instead, think of a microphone/speaker position over time. t=0, x=0. t=0.0001, x=1. t=0.0002, x=4, ....
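In code, that view is just a list of positions, one per instant. A minimal Python/NumPy sketch (tones and amplitudes made up for illustration):

Code:
import numpy as np

fs = 44_100                     # samples per second
t = np.arange(0, 1.0, 1 / fs)

# Three "instruments" sounding at once, each reduced to a pure tone here
drum   = 0.8 * np.sin(2 * np.pi * 110 * t)
guitar = 0.5 * np.sin(2 * np.pi * 440 * t)
voice  = 0.3 * np.sin(2 * np.pi * 660 * t)

# In the air they simply add: the microphone sees ONE displacement per instant
waveform = drum + guitar + voice
print(waveform[:5])             # just one number at each point in time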

Last edited by BowlOfRed; 29th December 2011 at 12:36 PM.
Old 29th December 2011, 12:43 PM   #10
sol invictus
Philosopher
 
 
Join Date: Oct 2007
Posts: 8,613
Originally Posted by julius View Post
Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz

These changes can hold information. This sequence of _electromagnetic_ wave frequencies might describe the changing _sound_ frequency of a single tone over time that is broadcast by some sender. But what if we wanted to send an entire song, which is, at each moment it plays, a combination of a lot of tones or frequencies?

The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

Wow, I hope my English is good enough to really convey what I mean. I could be all wrong in the basic understanding of how this works. I am a programmer and I tend to look at things in terms of collections, bits, etc.
You might be confusing yourself by thinking in the frequency domain. In the time domain, a song is just a function of time. And because radio is a real-time or nearly real-time process, it might be easier to understand that way.

If you do want to think in the frequency domain, start with AM. Imagine taking the Fourier transform of (i.e. decompose into frequencies) the lower waveform in the link in my last post. You'd get a big spike at the carrier frequency - which is irrelevant for what you hear - plus a bunch of other frequencies and phases that (when played simultaneously) reproduce the song. So all you need to do at the receiver is filter out the carrier frequency - but that's essentially automatic (the listener's ears will do it on their own, for one thing).
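You can see that spectrum directly with a quick Python/NumPy sketch (illustrative numbers only):

Code:
import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)                      # one audio tone
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * 10_000 * t)  # AM on a 10 kHz carrier

spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(len(am), 1 / fs)
print(freqs[spectrum > 0.1 * spectrum.max()])
# -> [ 9560. 10000. 10440.]: the big carrier spike plus the audio
#    riding alongside it as sidebands at fc +/- 440 Hz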
Old 29th December 2011, 12:53 PM   #11
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
Ah, things are starting to become clearer now.

I came up with the frequency range because it seems so... impossible that something as complex as music could be carried by something as simple as a single wave, but that is what actually happens. If I think about it I still find it hard to understand how a single speaker cone, or a microphone, can capture the simultaneous drums, bass, guitar and singing of a rock song. All these tones happening at the same time.... hm. But of course, this is what is happening, because in the end my eardrums are no more than an oscillating membrane too.

Last edited by julius; 29th December 2011 at 12:59 PM.
Old 29th December 2011, 12:56 PM   #12
Pulvinar
Graduate Poster
 
Join Date: Aug 2009
Posts: 1,405
Originally Posted by julius View Post
But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary data stream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?
It's interesting that people nowadays may first think of sound in digital terms.

The digital part is basically just a stream of numbers representing the position of the diaphragm in a microphone (say) as a function of time. The rest is pure analog: different tones are added on top of each other, at their source or in the air. It's the same idea as when you see waves of different sizes added together on an ocean, the lower frequencies being the longer waves. The measurement of the position of the wave at a single point will have just one value at any given instant.

The stream usually consists of two simultaneous measurements, representing the instantaneous position of your two eardrums.
Old 29th December 2011, 01:01 PM   #13
BowlOfRed
Graduate Poster
 
 
Join Date: Jul 2010
Location: Silicon Valley
Posts: 1,733
Exactly, but it is how it works.

The combined stuff hits your eardrum, and then your hearing system (probably with a lot of help in the cochlea) turns it back into the frequencies.

Here's a little link for Fourier transforms. It's a little math-heavy, but the first pages have some interesting background reading.
http://www.relisoft.com/science/physics/sound.html

And another
http://www.thefouriertransform.com/#introduction
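As a toy version of what those pages describe, this Python/NumPy sketch mixes two tones into one waveform and then pulls them back apart with a Fourier transform:

Code:
import numpy as np

fs = 8_000
t = np.arange(0, 1.0, 1 / fs)
mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)

# One number per instant goes in; the transform recovers the ingredients
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
print(freqs[np.argsort(spectrum)[-2:]])   # -> [330. 220.], the two tones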
Old 29th December 2011, 01:10 PM   #14
davefoc
Philosopher
 
 
Join Date: Jun 2002
Location: orange country, california
Posts: 9,428
I don't think I disagree with anything said above, but some of it seems like it might be skipping past the simple view of what amplitude modulation (AM) is.

The amplitude of the sound wave to be transmitted is used to control the amplitude of the transmitted wave. When the sound level is high, the output of the transmitter is correspondingly high; when the sound level is low, the output of the transmitter is correspondingly low.

Frequency Modulation (FM) is a slightly more complicated concept. With FM, the amplitude of the sound wave to be transmitted is used to control the frequency of the transmitter. When the sound level is high, the output frequency of the transmitter is raised, and when the sound level is low, the output frequency of the transmitter is reduced. The FM receiver converts the variation in frequency of the transmitted signal into variations in amplitude, which can then be amplified and used to drive speakers.

Radio can also be used to transmit sounds digitally. In these kinds of schemes the sound is encoded as a series of 1s and 0s, and the encoded data that represents the sound is transmitted. This is the technique used for cell phones, satellite radio and modern digital TV transmission.
__________________
The way of truth is along the path of intellectual sincerity. -- Henry S. Pritchett

Perfection is the enemy of good enough -- Russian proverb
Old 29th December 2011, 01:14 PM   #15
lomiller
Penultimate Amazing
 
 
Join Date: Jul 2007
Posts: 10,690
Originally Posted by julius View Post
The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?
All the frequency components that make up the original sound are modulated onto the carrier and they are all demodulated from the carrier at the receiver.

If you are doing this with a roughly linear analog device you normally don't even need to consider the fact that the sound itself consists of many different frequencies; you modulate the whole range onto the carrier at the transmitter and extract the whole range at the receiver.
__________________
"Anything's possible, but only a few things actually happen"
Old 29th December 2011, 01:35 PM   #16
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
Originally Posted by Pulvinar View Post
The rest is pure analog: different tones are added on top of each other, at their source or in the air. It's the same idea as when you see waves of different sizes added together on an ocean, the lower frequencies being the longer waves. The measurement of the position of the wave at a single point will have just one value at any given instant.

That is what seems so strange to me: with a single 'value' at every single point in time, where is the complexity you hear when you listen to a rock song? I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young would simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might be transported by radio or it might not be transported. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so... complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.
Old 29th December 2011, 01:47 PM   #17
BowlOfRed
Graduate Poster
 
 
Join Date: Jul 2010
Location: Silicon Valley
Posts: 1,733
Originally Posted by julius View Post
That is what seems so strange to me: with a single 'value' at every single point in time, where is the complexity you hear when you listen to a rock song?
It's not present in any single "instant". If you take a song or sound source, pure note or complex, and just play a fraction of a second, you won't be able to tell what it is. It will sound like a "beat" with no tone. As you get a smaller and smaller time slice, it really stops having normal "frequencies". So the complexity comes from pulling the frequencies out as you hear it over time.

If you have a sound editor like Audacity, go and grab a 1 second sample and play it back. Then play back shorter sections like a tenth of a second. It starts sounding very odd.

Quote:
I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young would simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might be transported by radio or it might not be transported. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments?
Because they're acting over time. Over a few milliseconds, the interference changes and the individual frequencies can be teased out by your ear and brain.
Old 29th December 2011, 01:59 PM   #18
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
This is great. Thanks for all your answers. I have been wondering for the past couple of days how this really works and now I understand.
Old 29th December 2011, 02:27 PM   #19
RecoveringYuppy
Philosopher
 
Join Date: Nov 2006
Posts: 9,947
Originally Posted by julius View Post
But how is the actual information content of a song 'encoded' in the wave?
Lots of good explanations in this thread. But the simplest answer to this question is that the "encoding" method is simple addition. The signal that you care about (the song) is added to the carrier wave. One minor complication arises in that there are multiple ways two signals can be added together. In AM broadcasting the amplitudes of the two signals are added together. FM is a bit less direct: in FM broadcasting the amplitude of the song is added to the frequency of the carrier.

In both cases you wind up with a broadcast signal that is not a simple single frequency when you're done. It's a multitude of frequencies constantly varying around the original carrier frequency.
Old 29th December 2011, 02:29 PM   #20
lomiller
Penultimate Amazing
 
 
Join Date: Jul 2007
Posts: 10,690
Originally Posted by julius View Post
That is what seems so strange to me: with a single 'value' at every single point in time, where is the complexity you hear when you listen to a rock song? I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young would simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might be transported by radio or it might not be transported. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so... complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.
You can display the resulting sound as separate sine waves, each with its own frequency and amplitude, or add them together to get a more complex signal that has a single discrete value at any given time. Either is just a different way of displaying the same information.

As to why a single sample from that complex waveform doesn't seem like sound: as explained above, it isn't. The music/sound comes from both what it is and how it's changing, so you can't look at just one point.

When you digitize sounds, you actually do something like this and look at the value of the complex wave at different points in time, but you need multiple samples. This allows you to understand not just its value at that one time but how it's changing over time. In fact you need more than 2 samples per cycle for the highest frequency you wish to capture, and it's best to have more than that (AKA oversampling). IOW if you want to capture voice up to 15 kHz you need to sample more than 30,000 times per second.
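A small Python/NumPy demonstration of that sampling rule (illustrative numbers): sample a 3 kHz tone below twice its frequency and it shows up as the wrong tone entirely.

Code:
import numpy as np

f_tone = 3_000                            # a 3 kHz tone
for fs in (8_000, 4_000):                 # one rate above 2*f_tone, one below
    t = np.arange(0, 0.1, 1 / fs)
    samples = np.sin(2 * np.pi * f_tone * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    print(fs, freqs[spectrum.argmax()])

# at fs=8000 the peak is at 3000 Hz (correct);
# at fs=4000 the tone aliases and the peak lands at 1000 Hz (wrong)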
__________________
"Anything's possible, but only a few things actually happen"
Old 29th December 2011, 03:05 PM   #21
JWideman
Graduate Poster
 
 
Join Date: Dec 2007
Posts: 1,233
I guess it will really blow your mind when you start thinking about how stereo is transmitted over radio, huh.
Old 29th December 2011, 03:25 PM   #22
ddt
Mafia Penguin
 
 
Join Date: Dec 2007
Location: Netherlands
Posts: 19,576
Originally Posted by julius View Post
If, for example, Neil Young would simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves would bump into each other, creating a single resultant wave.
Indeed, and that resultant wave looks nothing like the sine wave in the example pictures; it is a complex addition of a lot of different sine waves, each with its own frequency and amplitude. That would also happen when Neil Young only hits the snare drum, as each instrument produces not only its base frequency but also harmonics (which have as frequencies the integer multiples of the base frequency).

Originally Posted by julius View Post
That wave could hit the membrane in my ear or a microphone, and it might be transported by radio or it might not be transported. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so... complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.
Some devices are sensitive to only certain frequencies, and thus filter out only that part of the complex sound wave that contains the respective frequency or frequencies. In electronics, think of low-pass filters containing a coil or high-pass filters containing a capacitor. In human anatomy, this also happens in the ear.

Sound enters your outer ear and hits the eardrum, then is transported wholesale through the middle ear by the hammer, anvil and stirrup bones to the inner ear. The fluid in the cochlea of the inner ear basically still vibrates identically to the air outside. The cochlea's spiral of about 2.5 turns contains hair cells along its whole route, and these hair cells are each sensitive to a specific frequency (or rather a small range of frequencies) - the high frequencies at the start of the cochlea, the low frequencies at the end - and these hair cells are triggered by the waves in the fluid in the cochlea. Each hair cell is connected to its own nerve cell in the auditory nerve, which transmits the signals to the brain, and there the sound is assembled again - or, at least, interpreted in some way.

So you might view the way we pick up sounds as a Fourier transform (in the cochlea) followed by an inverse transform (in the brain).
__________________
"I think it is very beautiful for the poor to accept their lot, to share it with the passion of Christ. I think the world is being much helped by the suffering of the poor people." - "Saint" Teresa, the lying thieving Albanian dwarf

"I think accuracy is important" - Vixen
Old 29th December 2011, 05:11 PM   #23
joesixpack
Illuminator
 
Join Date: Feb 2005
Posts: 4,531
Originally Posted by Pulvinar View Post
It's interesting that people nowadays may first think of sound in digital terms.
...
This is what struck me about the initial question. I imagine a day when the operation of a phonograph will seem mystifying to people.
__________________
Generally sober 'til noon.
Old 30th December 2011, 04:47 PM   #24
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
@ddt: thanks. Very clear.
@joesixpack: I am actually not so young, I just tend to think digitally.

I have a new question that goes a bit further and is about the nature of radio waves (or electromagnetic waves in general).

I have been watching an interesting YouTube video on the production of radio waves (here: youtube.com/watch?v=aAcDM2ypBfE). In all discussions of radio waves, they are explained in the same fashion as mechanical waves, like the ones you see in water. But what are electromagnetic waves really like? Do they have an actual physical wave form? Do the photons they are made of bob up and down along a certain path? In other words: do electromagnetic waves have an amplitude like waves in water? (I am guessing they don't, but I'm not sure why.)
Old 30th December 2011, 05:00 PM   #25
This is The End
 
 
Join Date: Sep 2007
Posts: 10,906
I have the answer to this and provide the wiki link, but this seems like a great place to ask and for the info to be!

I remember that in the vinyl record album days they always made a big deal of whether a particular album was in stereo or not.

How can you get stereo from one needle going down one groove?

http://en.wikipedia.org/wiki/Gramoph...eophonic_sound

Condensed answer:

Quote:
During playback, the movement of a single stylus [needle] tracking the groove is sensed independently, e.g. by two coils, each mounted diagonally opposite the relevant groove wall.
__________________
________________________
Old 30th December 2011, 07:18 PM   #26
W.D.Clinger
Illuminator
 
 
Join Date: Oct 2009
Posts: 3,663
julius asks some nice questions:

Originally Posted by julius View Post
But what are electromagnetic waves really like? Do they have an actual physical wave form? Do the photons they are made of bob up and down along a certain path?
The photons don't bob up and down. Instead, the electric and magnetic fields vary in a periodic manner that satisfies a certain wave equation that can be derived from Maxwell's equations.

As to whether the electric and magnetic fields are physical, I'd say they're as physical as the gravitational field that makes it difficult for you to jump to the moon. Maybe more so.

Others might say these fields are just mathematical fictions that just happen to provide quantitative descriptions of physical phenomena to a remarkable number of decimal places. Whatever. The point is, these fields do an excellent job of describing what we take to be physical reality.

Originally Posted by julius View Post
In other words: do electromagnetic waves have an amplitude like waves in water (I am guessing they don't, but I'm not sure why)
Electromagnetic waves do have amplitude. You can measure that amplitude using instruments such as field strength meters, magnetometers, and your cell phone (which translates its measurements of the field strength and quality of reception in relevant frequency bands into a simple visual indicator such as up to five bars).
Old 30th December 2011, 08:47 PM   #27
shadron
Philosopher
 
 
Join Date: Sep 2005
Posts: 5,918
Originally Posted by julius View Post
That is what seems so strange to me: with a single 'value' at every single point in time, where is the complexity you hear when you listen to a rock song? I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young would simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might be transported by radio or it might not be transported. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so... complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.
The answer to the question is yes: the complexity of the whole situation is carried to the brain, where the sounds are separated, prioritized and sensed both individually and as a whole, in real time. All automatically, apparently hardwired from birth, and yet there is still room for learning more nuance, such as a classical music enthusiast can develop while following the second oboist.

An analog to this ability also rests in sight. The eye is sensitive to three colors, but they are not red, green and blue; they are indigo-purple, green and yellow-green. Shades of red from yellow to nearly infrared are the difference between the YG and G sensors as a function of the YG, or (YG-G)/YG. Not that the brain actually uses that formula, of course, but something like it, over millions of "pixels" with a 10 ms or less cycle time. We are, of course, not sensitive to that happening, but it explains why single-chroma color blindness has the results it does.

The rods, which provide night vision, do so with a pigment that responds in the yellow-green band, which is why red invariably becomes as black as blue does. The brain converts that to gray scale.

Last edited by shadron; 30th December 2011 at 08:53 PM.
Old 30th December 2011, 10:38 PM   #28
WhatRoughBeast
Graduate Poster
 
Join Date: Apr 2011
Posts: 1,427
Originally Posted by julius View Post
I have been watching an interesting YouTube video on the production of radio waves (here: youtube.com/watch?v=aAcDM2ypBfE). In all discussions of radio waves, they are explained in the same fashion as mechanical waves, like the ones you see in water. But what are electromagnetic waves really like? Do they have an actual physical wave form? Do the photons they are made of bob up and down along a certain path? In other words: do electromagnetic waves have an amplitude like waves in water? (I am guessing they don't, but I'm not sure why.)
julius, I'm afraid you're getting into very deep water here, but let me give you a brief response.

Radio waves (and light waves and x-rays and gamma rays and IR emissions, etc.) were classically analyzed as waves. That is, you can do things like constructive and destructive interference with them. Not only that, they can be shown to have measurable wavelengths and frequencies, and of course they have a propagation velocity: in a vacuum, c.

The first conceptions of EM waves assumed that they were, in fact, waves in something, like waves in air or water. This medium was called the ether, or aether, short for "luminiferous (a)ether". It caused quite a stir when the Michelson-Morley experiment showed pretty conclusively that there was no such thing. (Note: there are actually systems of physics which have proposed the existence of a locally modified ether, but let's not muddy the water too much.)

Another development was Maxwell's equations, which gave a solution for EM waves consisting of intertwined oscillating electric and magnetic waves propagating in lockstep; hence the word "electromagnetic". This turned out to be extraordinarily useful in describing such critters, and all seemed to be well.

Well, sort of. A number of quite interesting experiments were performed in which EM waves (most generally light waves) did not in fact behave like waves, but rather more like particles. Sort of. An obscure clerk at the Swiss patent office (named A. Einstein, of whom you may have heard) came up with a way to usefully explain how this could account for the behavior of light on certain materials, called the photoelectric effect. As a result of this and a few other very entertaining papers, he became much less obscure. From there various boffins went on to invent bizarre things like quantum mechanics.

To make a long story short, or at least shorter, you don't describe light as waving photons. Usually you choose waves (of a certain frequency/wavelength and amplitude) or photons (of a certain energy and momentum) depending on exactly what sort of interaction you will be measuring.

I hope this helps, and that others on the forum will not slam me too hard for making egregious simplifications.

Last edited by WhatRoughBeast; 30th December 2011 at 10:43 PM.
Old 31st December 2011, 01:56 AM   #29
ddt
Mafia Penguin
 
 
Join Date: Dec 2007
Location: Netherlands
Posts: 19,576
Originally Posted by shadron View Post
The answer to the question is, yes, the complexity of the whole situation is carried to the brain where the sounds are separated, prioritized and sensed both individually and as a whole, in real time. All automatically, apparently hardwired from birth, and yet there is still room for learning more nuance, such as a classical music enthusiast can have while following the second oboist.
The discrimination in frequencies already occurs in the inner ear, in the cochlea. There is evidence that the (outer) hair cells themselves are "tuned" to a specific frequency, that the arrangement of the hairs of the hair cells discriminates for frequency, and that the basilar membrane, on which the hair cells reside, also discriminates for frequency along its length. There are some 30,000 nerve endings in the cochlea; however, these are not easily mapped 1-1 onto the hair cells, it seems, so it's even more complex (and as yet poorly understood).

Britannica article with detailed info (section "Transmission of sound within the inner ear")
Page with picture of organization of hair cells
__________________
"I think it is very beautiful for the poor to accept their lot, to share it with the passion of Christ. I think the world is being much helped by the suffering of the poor people." - "Saint" Teresa, the lying thieving Albanian dwarf

"I think accuracy is important" - Vixen
Old 31st December 2011, 03:30 AM   #30
julius
New Blood
 
Join Date: Sep 2010
Posts: 6
Clinger/WhatRoughBeast: I was trying to wrap my head around the question "what electromagnetic waves look like", but I see that is not so simple. I have been reading on Wikipedia about the particle-wave duality that you also describe.

So, the way I understand it is that the wavy lines that are often used to represent waves (either electromagnetic or mechanical) are just a convenient way of looking at them, because you can easily explain concepts like wavelength, frequency and amplitude with them. They are a model.

And, would it be correct to say that the amplitude of an electromagnetic wave corresponds to the number of photons arriving (particle view) at a location, like the antenna of your cell phone?

Last edited by julius; 31st December 2011 at 03:32 AM.
Old 31st December 2011, 04:24 AM   #31
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Posts: 2,546
Here's how I explain radio (simplified):

1) Any sound or music, no matter how complex, is only air pressure changing over time.

2) A microphone changes these variations of air pressure into variations in electrical voltage and current. More air pressure, more positive current; less air pressure, more negative current.

3) A radio carrier frequency, let's say 1 megahertz (current alternating a million times a second), is varied in strength instantaneously by the audio current from #2 above, using an amplitude-modulating (AM) electrical circuit.

4) A transmitting antenna converts the carrier frequency generated by #3 into electromagnetic waves that radiate into the space around the antenna.

5) An antenna in your radio receiver converts this wave in space generated by #4 into electrical current, still alternating at 1 megahertz.

6) The variations in strength of the alternating current from #5 are extracted and isolated by a rectifying and integrating circuit, which regenerates the audio current.

7) The audio current is then sent to the loudspeaker in your radio, which converts this current into vibrations of the speaker diaphragm, which vary the air pressure, matching the air pressure changes of the original sound source.

8) The air pressure changes reach your ear -- the same sequence of changes picked up by the microphone in #1, and you hear what you would have heard if your ear was where the microphone was.

You got that?

In FM radio, the frequency of the carrier wave is varied (like vibrato) instead of the amplitude.

I skipped amplification, filtering, and frequency conversion steps, just to make the principles clear.

Hope this helps!

Ask me how analogue color television and SQ quadraphonic sound were transmitted, each by a single radio wave, or recorded on a rotating vinyl disk by a single vibrating needle (yes, color video too).
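For the digitally minded, the whole chain above compresses into a short Python/NumPy simulation (all values made up; amplification and filtering still skipped):

Code:
import numpy as np

fs = 8_000_000                                 # simulation rate: 8 samples per carrier cycle
t = np.arange(0, 0.005, 1 / fs)                # 5 ms

# steps 1-2: sound at the microphone, now an electrical signal
audio = np.sin(2 * np.pi * 1_000 * t)          # a 1 kHz tone

# steps 3-4: amplitude-modulate a 1 MHz carrier and radiate it
carrier = np.cos(2 * np.pi * 1_000_000 * t)
transmitted = (1 + 0.5 * audio) * carrier

# steps 5-6: receive, rectify, and integrate (smooth) to pull the audio back out
rectified = np.maximum(transmitted, 0.0)       # half-wave rectifier
win = 4 * (fs // 1_000_000)                    # smooth over ~4 carrier cycles
audio_out = np.convolve(rectified, np.ones(win) / win, mode="same")
audio_out -= audio_out.mean()

# steps 7-8: audio_out follows the original 1 kHz tone (up to a scale
# factor) and is what would drive the speaker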

Last edited by Mr. Scott; 31st December 2011 at 04:37 AM.
Old 31st December 2011, 07:15 AM   #32
W.D.Clinger
Illuminator
 
 
Join Date: Oct 2009
Posts: 3,663
Originally Posted by julius View Post
And, would it be correct to say that the amplitude of an electromagnetic wave corresponds to the number of photons arriving (particle view) at a location, like the antenna of your cell phone?
There's a correspondence, yes, but it's a bit complicated.

Radio technology preceded quantum mechanics, and can be understood pretty well without talking about photons and other quantum mechanical stuff.

There are several different ways to define the amplitude of a radio wave, and they aren't all equivalent. In my limited experience, the root mean square definition of amplitude is most important because it corresponds most closely to power. Peak-to-peak amplitude is also important because it's closely related to a form of distortion known as clipping.
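A tiny Python/NumPy illustration of those two definitions on the same wave (numbers made up):

Code:
import numpy as np

t = np.arange(0, 1.0, 1 / 10_000)
wave = 3.0 * np.sin(2 * np.pi * 50 * t)        # a 50 Hz wave of amplitude 3

rms = np.sqrt(np.mean(wave ** 2))              # ~2.12 (= 3/sqrt(2)); tracks power
peak_to_peak = wave.max() - wave.min()         # ~6.0; what clipping cares about
print(rms, peak_to_peak)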
Old 31st December 2011, 11:02 AM   #33
WhatRoughBeast
Graduate Poster
 
Join Date: Apr 2011
Posts: 1,427
Originally Posted by julius View Post
And, would it be correct to say that the amplitude of an electromagnetic wave corresponds to the number of photons arriving (particle view) at a location, like the antenna of your cell phone?
Yes. The power of an EM wave over an area corresponds to the number of photons (with the appropriate energy) hitting that area per unit time.

However, the photon model is more useful, for instance, when talking about a detector looking at very low amplitude EM waves. In this instance, the detector may reliably produce a low photon count while the wave model insists that there is not enough energy available to trigger the detector. Plus, the detector will show a peculiar inability to detect long-wavelength EM waves even when there is plenty of power available. (See the photoelectric effect.)

Likewise, you can talk about tuning a radio antenna's length to the wavelength of the incoming EM waves to get greater or lesser signal strength, and this just doesn't make a lot of sense if you're thinking of the signal as a flux of photons.

The point is that your comment about models is correct. How to most usefully think about radio waves is determined by exactly what you want to do with them.
Old 31st December 2011, 11:10 AM   #34
jj
Penultimate Amazing
 
Join Date: Oct 2001
Posts: 21,381
Originally Posted by julius View Post
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary data stream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Oh, and book suggestions are always welcome of course.
First, AM and FM as typically used are analog systems, that carry the information (music, etc) directly as the electrical waveform (of course modulated properly). There is, strictly speaking, no "encoding" beyond the modulation method.

Now, digital signals are (nearly always) captured first as "PCM", which stands for "Pulse Code Modulation". This gives you a very accurate, high-bit-rate digital copy of the signal.
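A minimal Python/NumPy sketch of the PCM idea, using CD-style 16-bit samples (the tone is just for illustration):

Code:
import numpy as np

fs = 44_100                                    # CD sample rate
t = np.arange(0, 0.001, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t)           # "analog" waveform, in -1..+1

# PCM: sample at regular intervals, round each sample to a 16-bit integer
pcm = np.round(signal * 32767).astype(np.int16)
print(pcm[:8])   # the stream of numbers a CD stores: 44,100 per second per channel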

You can find some free stuff about this in the "Conversion:..." slide deck at www.aes.org/sections/pnw/ppt.htm for one source.

For typical kinds of encoding beyond that, the same web site has a "perceptual coding tutorial" that explains the basics behind things like MP3, AAC, and the like.

For a book on basic signal processing, it's hard to beat Rabiner and Gold, but that's been out of print forever and a day.

"Fourier Analysis" by Morrison (Wiley Interscience) will explain the duality between "tones" or "frequencies" and time domain waveform to more or less any degree you might want to know.

"Understanding Digital Signal Processing" by Richard Lyons is a good book for people who have an engineering background but not much in the way of communications theory (that's AM, FM, SSB, ODFM, QPSK, etc) or signal processing.

The classic digital communications text is probably not terribly accessible without a lecturer to go along, sorry.
Old 31st December 2011, 11:20 AM   #35
jj
Penultimate Amazing
 
Join Date: Oct 2001
Posts: 21,381
Originally Posted by RecoveringYuppy View Post
Lots of good explanations in this thread. But the simplest answer to this question is that the "encoding" method is simple addition. The signal that you care about (the song) is added to the carrier wave. One minor complication arises in that there are multiple ways two signals can be added together. In AM broadcasting the amplitudes of the two signals are added together. FM is a bit less direct: in FM broadcasting the amplitude of the song is added to the frequency of the carrier.

In both cases you wind up with a broadcast signal that is not a simple single frequency when you're done. It's a multitude of frequencies constantly varying around the original carrier frequency.
"addition" is very likely to confuse things here.

In AM, the carrier (radio frequency) is simply multiplied by the audio signal, in the form carrier × (0.5 + 0.5 × audio signal), where the audio signal never goes beyond ±1. Interestingly, the information power present in the signal never exceeds half the zero-signal carrier power.

In DSB (double sideband) the modulation is carrier × audio signal. No signal means no carrier. In SSB (single sideband) one sideband is then removed. This takes away no information. In both, the information power is the whole signal.

In FM, the frequency is modulated by the audio signal (well, actually the phase is modulated, which amounts to the same thing). This creates a signal with much wider bandwidth than the audio signal (bandwidth meaning the highest minus the lowest frequency it occupies). This additional bandwidth provides redundancy, hence FM is less noisy than AM. These are all called "analog" systems, because the signal is quantized neither in time nor in amplitude. A necessary characteristic of such a system is that every copy must add noise. There are no ifs, ands, or buts.

So-called "digital" signals are a quantized, sampled analog of the original signal. Because they are quantized both in amplitude and in time of occurrence, it is possible to completely remove small errors. When these numbers (which is what you get from quantization) are converted to bits, it is possible to remove large errors, up to a given point, at which everything falls totally apart, thud, crunch. Claude Shannon came up with a theorem (Shannon's bound) that shows when this MUST happen. Real systems are always a bit worse, since to get to the bound you must use infinite time...

Digital signals, however, take up a much wider bandwidth. An audio signal may be 20 Hz - 20 kHz, two channels, but a CD uses a 1.4112 megabit/second bit rate to represent that. Using a modem may reduce the bit rate, and will also reduce the reliability. You gets what you pays for.
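Those AM and DSB formulas, written out as a Python/NumPy sketch (illustrative frequencies):

Code:
import numpy as np

fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)            # stays within +/-1
carrier = np.cos(2 * np.pi * 10_000 * t)

am  = carrier * (0.5 + 0.5 * audio)            # ordinary AM: carrier always present
dsb = carrier * audio                          # DSB: no audio in, no signal out

print(np.mean(am ** 2), np.mean(dsb ** 2))     # compare the average power of each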
Old 31st December 2011, 12:13 PM   #36
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Posts: 2,546
I realized I can simplify the explanation even more.

For AM radio, the strength of the radio carrier wave from the transmitter becomes greater or lesser as the air pressure picked up by the microphone becomes greater or lesser.

For FM radio, the frequency of the radio carrier wave becomes slightly greater or lesser, following the air pressure picked up by the microphone.

The radio receiver translates the carrier wave strength or frequency variation back into air pressure changes.

When you get into multiplex (stereo and surround sound) or video, or digital, it gets really complicated really fast. Still, what goes through space is an electromagnetic carrier wave that varies in strength, frequency, and/or phase and that encodes the information transmitted.
Old 31st December 2011, 12:17 PM   #37
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Posts: 2,546
How many of you remember the audio loops in school classrooms when you were little?

That's the most basic form of radio -- no carrier wave. The speaker-level output of a record player was sent to a loop of metal foil stuck to the walls of the classroom, circling the classroom once. A receiver like a Walkman picked up the electromagnetic audio waves in its own coil and amplified them to headphones.
Old 31st December 2011, 01:19 PM   #38
ben m
Guest
 
Join Date: Jul 2006
Posts: 6,387
Originally Posted by Mr. Scott View Post
How many of you remember the audio loops in school classrooms when you were little?

That's the most basic form of radio -- no carrier wave. The speaker-level output of a record player was sent to a loop of metal foil stuck to the walls of the classroom, circling the classroom once. A receiver like a Walkman picked up the electromagnetic audio waves in its own coil and amplified them to headphones.
I've never heard of this. But maybe that's because I'm a relative whippersnapper. Is that a science-lab experiment, or a primitive PA system, or what?
Old 31st December 2011, 11:25 PM   #39
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Posts: 2,546
Originally Posted by ben m View Post
I've never heard of this. But maybe that's because I'm a relative whippersnapper. Is that a science-lab experiment, or a primitive PA system, or what?
My school was in an academic town where new products and concepts were tested.

Difficult to google this one, but I'm learning they were called "induction loops" and may have been mainly targeted at hearing-impaired students.

Quote:
Induction loops were often fitted in special schools in the 1960s and 1970s, but proved to be very unsatisfactory in use and then fell into disuse. The classroom loop system was replaced by personal FM systems such as our fmGenie radio aid system, Phonak Microlink or Phonic Ear, which are widely used in education today.
Oh, it seems they are still commercially available
Old 1st January 2012, 12:05 AM   #40
psionl0
Skeptical about skeptics
 
 
Join Date: Sep 2010
Location: 31°57'S 115°57'E
Posts: 16,395
Originally Posted by julius View Post
I have been watching an interesting YouTube video on the production of radio waves (here: youtube.com/watch?v=aAcDM2ypBfE). In all discussions of radio waves, they are explained in the same fashion as mechanical waves, like the ones you see in water. But what are electromagnetic waves really like? Do they have an actual physical wave form? Do the photons they are made of bob up and down along a certain path? In other words: do electromagnetic waves have an amplitude like waves in water? (I am guessing they don't, but I'm not sure why.)
You have raised three separate questions in this thread:
  • How sound is converted into an electrical signal and vice versa
  • How such an electrical signal can modulate a carrier wave
  • The nature of the carrier wave
With regard to the latter: radio waves are like other forms of light. Sometimes it is better to treat them as particles to explain their behaviour, and sometimes it is better to regard them as waves. Since radio waves are usually generated by some oscillator circuit that creates an alternating EMF, they are usually considered as waves rather than particles.

As always, light is a mysterious thing.