
12th April 2006, 07:25 AM  #1 
Banned
Join Date: Feb 2004
Posts: 7,675

Help needed with Excel.
I only started using spreadsheets a few months ago so very new at this. Also I know virtually nothing about mathematics.
Anyway I have thse values whereby y varies with x. I want to find a precise mathematical relationship between x and y so that if I input any x value I get the y value displayed. The values are these (2.2,1.6), (2.4,1.75), (2.6,1.92), (2.8,2.12), (3, 2.35) and (3.2,2.61). In each case x is the first number and y the second. These y values don't simply increase in value unformly but increase at a very slightly increasing rate. So how do I obtain a mathematical expression? equation? relating the two sets of numbers? I tried the chart function thingy last night and obtained a graph of these points. I was hoping that it might supply me with the relationship between these 2 sets of numbers. So I was messing about and discovered something called "add trendline". So I tried that. Experimented and found the expontial trendline gave the best fit. I ticked the R squared value and equation options. I've got on the chart R^2 = 0.999 and y = 0.5405*e^0.49x So it seems that it is giving me this highly obscure relationship between y and x I didn't know what "e" meant, but I have a vague feeling it's simply a number like pi with an infinite number of digits after the decimal point. I looked it up on google and some site said it's value is 2.71828. Unfortunately when I tried it, it gave incorrect values . For example when I put x as 3.2 in the equation it gave an answer of 2.82 when it is supposed to be 2.61 (and 2.82 is way above the trend line). So what gives here? 
12th April 2006, 07:44 AM  #2 
Penultimate Amazing
Join Date: Aug 2001
Posts: 42,367

Here's how:
Turn off your computer. Don't turn it on again. Ever. 
12th April 2006, 07:45 AM  #3 
Muse
Join Date: Mar 2006
Posts: 608


12th April 2006, 08:11 AM  #4 
Banned
Join Date: Feb 2004
Posts: 7,675

I don't believe it! I just spent the last hour trying to figure it out, adding more digits for "e" and right-clicking the equation to expand the digits after the decimal point there, and all the time I had neglected to put those brackets in!

BTW, is it not possible just to put e in directly rather than some approximation like 2.71828? I tried putting =0.5405*e^(0.49*Input!B10) but it just gave a #NAME? error. (Input!B10 is the cell where I put the value of x in.)
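The error comes up because Excel has no bare constant named "e"; the usual workaround is the EXP() function, i.e. something like =0.5405*EXP(0.49*Input!B10). A quick sketch of the same trendline calculation in Python (the test value 3.2 comes from the thread; everything else is illustrative):

```python
import math

# Evaluate the chart trendline y = 0.5405 * e^(0.49x).
# Excel equivalent (assumed): =0.5405*EXP(0.49*B10), since Excel has no bare "e".
def trendline(x):
    return 0.5405 * math.exp(0.49 * x)

print(trendline(3.2))  # ≈ 2.59, close to the table's 2.61
```

With the brackets in the right place, the fitted curve lands near the data points instead of the 2.82 the bracket-less formula produced.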
12th April 2006, 08:13 AM  #5 
Penultimate Amazing
Join Date: Jun 2003
Posts: 47,281

Yes, it is a special irrational number like pi. In fact, the two happen to be intimately related: you can kind of think of e being the "special" number for exponential functions the same way that pi is the "special" number for trigonometric functions. Here are two little equations which may pique your interest in e:

e = 1/0! + 1/1! + 1/2! + 1/3! + ...

e^x = 1 + x + x^2/2! + x^3/3! + ...

Seems almost too simple, doesn't it? BTW, since you say you don't know much math, the ! notation means factorial. It's defined like this:

1! = 1, 2! = 2*1 = 2, 3! = 3*2*1 = 6, 4! = 4*3*2*1 = 24

and so on. Factorials get big very fast. The connection to pi comes from plugging in imaginary numbers to these equations, from which one can prove:

e^(i*pi) = -1
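The factorial series converges remarkably quickly; a small numeric check (a sketch, not from the thread):

```python
import math

# Approximate e by summing 1/n! for n = 0..12; the tail beyond n=12
# contributes less than 2e-10, so this already agrees with math.e
# to about nine decimal places.
approx_e = sum(1 / math.factorial(n) for n in range(13))
print(approx_e)
```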
__________________
"As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 

12th April 2006, 08:20 AM  #6 
Thinker
Join Date: Jun 2005
Posts: 230


12th April 2006, 08:28 AM  #7 
Thinker
Join Date: Mar 2002
Posts: 161

A shorter exponential regression equation would simply calculate e^0.49 and substitute it, giving y = 0.5405 * 1.6323^x, where 1.6323 = (1 + 0.6323). Most people are introduced to this as "Interest = Principal * (1 + r)^n", where r is the percentage increase (the interest rate per period, when compounded more than once a year).

A relationship such as this is not wholly obscure, and using base 1.6323 instead of the e^x form indicates that for every full unit of increase in the value of x (e.g., from 2 to 3) the value of y will increase by approximately 63%.

Whether or not your results are applicable at values below or above your recorded data is another matter, as you may only be allowed to interpolate (use the equation for values between your highest and lowest x values); extrapolation (using your equation to predict values before and after your recorded x values) poses many other questions, concerns and chances for error in approximation.

This is probably more detail than necessary, but I'm in the library, bored, and thought I'd attempt to help. (I typed this in between teaching classes so hopefully there are not many mistakes. Please let me know if you find some.) Either way, considering yourself a 'beginner' on Excel and being able to do what you did is EXCELLENT. Good luck! 
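The base conversion described above is easy to check numerically; this sketch compares the e-based and 1.6323-based forms (note the 0.5405 coefficient from the trendline applies to both):

```python
import math

base = math.exp(0.49)  # ≈ 1.6323: converts y = 0.5405*e^(0.49x) to base form

def y_exp(x):
    return 0.5405 * math.exp(0.49 * x)

def y_base(x):
    return 0.5405 * base ** x

# The two forms agree, and each full unit of x multiplies y by ~1.63 (+63%).
print(y_base(3.0) / y_base(2.0))  # ≈ 1.6323
```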
12th April 2006, 08:30 AM  #8 
Thinker
Join Date: Jun 2005
Posts: 230


12th April 2006, 08:34 AM  #9 
Graduate Poster
Join Date: Dec 2005
Posts: 1,463

I did some quick calculations for you.
I came up with a correlation coefficient of 0.9947, which makes my r^2 = 0.98942809. The least-squares line I came up with was y = 1.0071x - 0.6609. This line doesn't produce the exact values you asked for, but working with statistics this is expected. The value for x = 3.2 returned y = 2.56182, which is a little closer than the 2.82 you had earlier. The closest value was for x = 2.4, which gave me y = 1.75614.

... Hmm, it seems that while I was working the problem the old-fashioned way I learned in statistics class (some paper and a calculator), some of you have already produced better answers for him. 
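The quoted slope and intercept can be reproduced with a few lines of ordinary least squares; a sketch (not part of the original post) using the six points from the opening post:

```python
# Ordinary least squares on the thread's six (x, y) points; reproduces
# the quoted line y = 1.0071x - 0.6609.
xs = [2.2, 2.4, 2.6, 2.8, 3.0, 3.2]
ys = [1.6, 1.75, 1.92, 2.12, 2.35, 2.61]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
slope = sxy / sxx                    # ≈ 1.0071
intercept = mean_y - slope * mean_x  # ≈ -0.6609
print(slope, intercept)
```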
__________________
“There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.” - Carl Sagan

“The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball ninety million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be.” – Douglas Adams 

12th April 2006, 08:53 AM  #10 
Banned
Join Date: Feb 2004
Posts: 7,675

Well, the figures are to do with prematch probabilities of soccer matches resulting in less than 2.5 goals, i.e. either 0, 1 or 2 goals.

Thus the values 2.2, 2.4 etc. represent the average expected number of goals there will be in the match. The values 1.6, 1.75 etc. represent the probabilities. Thus with an expected average of 2.2 goals there is a 1.6 average expected chance that there will be less than 2.5 goals in the entire match.

Now, in practice, in professional soccer, the expected average will never dip below 1.9 goals in the match, nor will it exceed 3.4 (but of course there are many 0-0 and 3-2 scorelines etc.; it's the expected average we're talking about here). So I imagine that extrapolating above or below those values to any significant extent will not be applicable.
12th April 2006, 08:59 AM  #11 
Banned
Join Date: Feb 2004
Posts: 7,675

And BTW, I much appreciate everyone's help! Thanks!

12th April 2006, 09:08 AM  #12 
Muse
Join Date: May 2005
Posts: 849


12th April 2006, 10:35 AM  #13 
Graduate Poster
Join Date: Jun 2005
Posts: 1,188

LLH 
12th April 2006, 10:55 AM  #14 
Ardent Formulist
Join Date: Jun 2005
Location: Austin, TX
Posts: 15,334


__________________
To understand recursion, you must first understand recursion. Woo's razor: Never attribute to stupidity that which can be adequately explained by aliens. 

12th April 2006, 11:00 AM  #15 
Banned
Join Date: Feb 2004
Posts: 7,675


12th April 2006, 11:06 AM  #16 
Penultimate Amazing
Join Date: Aug 2001
Posts: 23,676


__________________
Science is self-correcting. Woo is self-contradicting. 

12th April 2006, 11:16 AM  #17 
Illuminator
Join Date: Dec 2005
Posts: 4,701

And now a little aside on the number e.

2.718281828459045... is actually quite easy to remember once someone points out the pattern. My grade 10 math teacher pointed it out and I have not forgotten yet, 21 years later!!

2.7 1828 1828 45 90 45
__________________
100% Cannuck! 

12th April 2006, 11:20 AM  #18 
Banned
Join Date: Feb 2004
Posts: 7,675

Whose comment? Mine or his? Whether his comment is serious or not, I have absolutely no idea what he means. I have never encountered the word "floobie". Even if I had, I cannot imagine that it would make sense in the context of what he wrote. It's just words strung together meaninglessly, so far as I can ascertain. So what's the joke?

12th April 2006, 11:23 AM  #19 
TeaTime toad
Join Date: Mar 2005
Posts: 15,516


12th April 2006, 11:33 AM  #20 
Banned
Join Date: Feb 2004
Posts: 7,675


12th April 2006, 11:37 AM  #21 
TeaTime toad
Join Date: Mar 2005
Posts: 15,516


12th April 2006, 12:21 PM  #22 
Illuminator
Join Date: Dec 2005
Posts: 4,701

LLH is the one who needs to 'get over it'. It's all fine and dandy to form an opinion about someone. However, when that opinion leaves you unable to differentiate between valid questions or discussions and nonsense, then it is you who should be shunned as much as any woo.

The pettiness of the so-called enlightened ones on these forums is quite pathetic. 
__________________
100% Cannuck! 

12th April 2006, 12:28 PM  #23 
Graduate Poster
Join Date: Jun 2005
Posts: 1,188


12th April 2006, 03:40 PM  #24 
Banned
Join Date: Feb 2004
Posts: 7,675


12th April 2006, 06:16 PM  #25 
Ardent Formulist
Join Date: Jun 2005
Location: Austin, TX
Posts: 15,334

Same here. I confess I hadn't actually read Ian's post thoroughly before spewing my drink at the response. Something about the use of whimsical emoticons in the middle of a serious discussion of mathematics and computer software just seemed to be begging for a crass response...

__________________
To understand recursion, you must first understand recursion. Woo's razor: Never attribute to stupidity that which can be adequately explained by aliens. 

12th April 2006, 11:51 PM  #26 
Graduate Poster
Join Date: Jun 2005
Posts: 1,188


14th April 2006, 03:32 PM  #27 
Graduate Poster
Join Date: Apr 2004
Location: Yorkshire
Posts: 1,180

Ian, I'm baffled by your account of the soccer match data, for 2 reasons:

1) Your Y-values aren't probabilities (and they go the wrong way).
2) Whatever they are, I suppose the Y-values were derived from the X-values, so I don't understand why you want to find the relation by regression analysis. You can't recreate a defined relationship from the numbers, unless you know the form of the equation and just want to calculate the coefficients (and even then you will have rounding errors).

Please give more details! 
14th April 2006, 05:12 PM  #28 
Penultimate Amazing
Join Date: Aug 2001
Posts: 23,676


__________________
Science is self-correcting. Woo is self-contradicting. 

14th April 2006, 07:36 PM  #29 
Banned
Join Date: Feb 2004
Posts: 7,675

They are probabilities. Perhaps it would help if I pasted in a couple of the tables here.
(Oops, just finished my post and discovered that I'm unable to paste in properly. Why the hell can't we paste something simple like a table!)

HOW MANY GOALS WILL THERE BE?

PREMATCH EXPECTATION: 2.2

                So far 0            So far 1            So far 2
MINUTE    Under 2.5  Over 2.5   Under 2.5  Over 2.5   Under 2.5  Over 2.5
 1.00       1.60       2.66         -         -           -         -
 6.00       1.55       2.83        2.64      1.61        7.95      1.14
11.00       1.49       3.05        2.48      1.68        7.25      1.16
16.00       1.43       3.33        2.32      1.76        6.57      1.18
21.00       1.37       3.67        2.17      1.86        5.94      1.20
26.00       1.32       4.10        2.02      1.98        5.35      1.23
31.00       1.27       4.65        1.89      2.12        4.81      1.26
36.00       1.23       5.35        1.77      2.31        4.31      1.30
41.00       1.19       6.28        1.65      2.54        3.86      1.35
46.00       1.15       7.86        1.53      2.90        3.37      1.42
51.00       1.12       9.70        1.44      3.30        3.01      1.50
56.00       1.09      12.60        1.35      3.88        2.67      1.60
61.00       1.06      17.34        1.27      4.75        2.36      1.74
66.00       1.04      25.73        1.20      6.12        2.07      1.93
71.00       1.02      42.35        1.13      8.52        1.82      2.22
76.00       1.01      81.58        1.08     13.37        1.59      2.68
81.00       1.00     205.11        1.04     26.21        1.39      3.55
86.00       1.00     895.32        1.01     90.78        1.21      5.66

PREMATCH EXPECTATION: 3.2

                So far 0            So far 1            So far 2
MINUTE    Under 2.5  Over 2.5   Under 2.5  Over 2.5   Under 2.5  Over 2.5
 1.00       2.61       1.62         -         -           -         -
 6.00       2.44       1.69        5.24      1.24       20.13      1.05
11.00       2.27       1.79        4.73      1.27       17.61      1.06
16.00       2.11       1.90        4.25      1.31       15.29      1.07
21.00       1.96       2.04        3.81      1.36       13.20      1.08
26.00       1.82       2.22        3.40      1.42       11.34      1.10
31.00       1.69       2.45        3.04      1.49        9.71      1.11
36.00       1.57       2.74        2.72      1.58        8.29      1.14
41.00       1.47       3.12        2.43      1.70        7.06      1.16
46.00       1.36       3.76        2.13      1.88        5.81      1.21
51.00       1.29       4.49        1.92      2.08        4.94      1.25
56.00       1.22       5.63        1.73      2.38        4.15      1.32
61.00       1.15       7.47        1.55      2.81        3.47      1.41
66.00       1.10      10.65        1.40      3.49        2.88      1.53
71.00       1.06      16.83        1.27      4.66        2.38      1.72
76.00       1.03      31.11        1.17      6.94        1.97      2.03
81.00       1.01      75.15        1.09     12.63        1.62      2.62
86.00       1.00     317.69        1.03     36.85        1.33      4.06

I have 6 such tables which I copied from a book (the definitive guide to betting exchanges). These are the probabilities that the author reckons hold for there being either less or more than 2.5 goals in a football (soccer) match as the match progresses. 
It lists the probabilities for there having been so far 0, 1 or 2 goals. The 6 tables represent prematch expectations of 2.2, 2.4, 2.6, 2.8, 3.0 and 3.2 goals, of which I've just pasted in the first and last.

The figures I quoted in my opening post are from the first minute (i.e. right at the beginning of the match) for there being under 2.5 goals, as the prematch expectation of goals increases from 2.2 to 3.2 goals. Now, the greater the number of goals we expect on average before the match, the less probable it is that there will be less than 2.5 goals. That is why the probability decreases (the quoted figure y increases, because the chance is 100 divided by it).

What I'm going to do is to have just one table, so that when I enter the prematch goal expectation in a cell it will tell me all the probabilities for getting less or more than 2.5 goals as the match progresses. I needed to find the relationship between the probability for under and over 2.5 goals, for each 5-minute interval in the game, as the prematch goal expectation increases. That way I can enter any value into the cell, e.g. a prematch expectation of 2.7 goals, and obtain all the appropriate probabilities (the mathematical relationship I obtained told me the probabilities for such a table aren't simply halfway between the values in the 2.6 and 2.8 tables).

I also intend to generate tables for the probabilities of there being under and over 1.5 goals as the match progresses, and the same goes for under and over 3.5 goals, under and over 4.5 goals, under and over 5.5 goals and under and over 6.5 goals.

Why am I doing all this? I expect everyone has guessed. It is for the purposes of gambling on the total number of goals in football matches while a match progresses.
Anyway, I've provided much more detail, so hopefully you should understand what I'm doing. 
14th April 2006, 07:42 PM  #30 
Banned
Join Date: Feb 2004
Posts: 7,675

What little mathematics I've done, I was much better at than any of my fellow pupils at school. Indeed, in my 4th year exam when I was 14, I got the highest mark ever in the history of the school. But I never did any more mathematics after 16, and I've never done any statistics whatsoever (and certainly have never done any calculus and the like).

14th April 2006, 07:56 PM  #31 
Banned
Join Date: Feb 2004
Posts: 7,675


14th April 2006, 10:33 PM  #32 
Graduate Poster
Join Date: Dec 2005
Posts: 1,463

In statistics, the requirements of probability are:

1. The probability of any outcome must lie between zero and one.
2. The sum of the probabilities of all the outcomes in a sample must be one.

So the 1.6 you had, being equal to 62.5%, should be written as .625 if you want to use it as a statistical probability. Also, if you add each of your Ys together as statistical probabilities then you are well over 1, and this means that you can't use your set of Ys as statistical probabilities. However, this does not invalidate what you are attempting to accomplish; I just think you have your labels wrong.

You can still use your data as graph plots and use statistics to measure the linear relationship. You have to remember, though, that once you start using statistical inference, you'll no longer be able to get exact answers.

In statistics, when you have a scatter plot graph, like the one your data makes, you can use the data to make the least squares line. This is a line drawn through the points so that the sum of the squared deviations between the points and the line is minimized. This line won't give you exact answers though; just ones close to the actual answer within reasonable probability. Imagine that on every point of that line there is a bell curve that represents the probability of the actual point being a certain distance from your line. I calculated such a line here for you. 
__________________
“There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.” - Carl Sagan

“The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball ninety million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be.” – Douglas Adams 

15th April 2006, 12:53 AM  #33 
Illuminator
Join Date: May 2004
Posts: 4,787

Originally Posted by Interesting Ian
pi() = pi

The pi function is what's known as a "null function"; it only has one value, so you don't have to enter any input. Other Excel null functions include:

today's date: today()
date and time: now()
random number generator: rand()
Originally Posted by Math Maniac
Originally Posted by Unnamed
Originally Posted by Interesting Ian
Originally Posted by Molinaro
Originally Posted by Lucky
Originally Posted by Interesting Ian
Take two five-minute periods, each with an independent probability p of a goal being scored. The four outcomes are:

score in neither ((1-p)^2)
score in the first five-minute period, but not the second (p(1-p))
score in the second five-minute period, but not the first ((1-p)p)
score in both (p^2)

So the total probability is the sum of the last three: p((2-2p)+p) = p(2-p). Notice that the last one doesn't count for any more than the middle two; the extra goal(s) are "wasted". You therefore get less than 2p. If you were to count the last one twice, because there are twice as many goals, you would get 2p, which is what you (incorrectly) expected.

Furthermore, if it were linear, then there would be a point at which the probability is more than 100%, which is absurd, isn't it?
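The counting argument above can be checked directly: the probability of at least one scoring period is 1 - (1-p)^2 = p(2-p), which is strictly less than the "linear" guess 2p whenever p > 0. A quick sketch:

```python
# Verify: P(at least one of two independent periods produces a goal)
# equals p*(2 - p), and is always below the naive linear answer 2p.
def p_at_least_one(p):
    return 1 - (1 - p) ** 2

for p in (0.1, 0.3, 0.5, 0.9):
    assert abs(p_at_least_one(p) - p * (2 - p)) < 1e-12
    assert p_at_least_one(p) < 2 * p
```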

15th April 2006, 03:31 AM  #34 
Banned
Join Date: Feb 2004
Posts: 7,675

No, what should add up to 100% is this: for any given prematch expectation of goals, the respective probabilities for there being under 2.5 goals and over 2.5 goals, for any given number of minutes into the game, should add up to 100%. Thus let's look at the 2.2 goals table. At minute one the figures I have are 1.60 for under 2.5 goals and 2.66 for over 2.5 goals. So dividing 100 by each of those figures and adding the results together should give 100.
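That check can be written out explicitly; a sketch using the minute-1 figures from the 2.2 table, where the quoted figures behave like decimal odds and the implied chance is 100 divided by the figure:

```python
# Minute 1 of the 2.2 table: 1.60 for under 2.5 goals, 2.66 for over.
under_odds, over_odds = 1.60, 2.66
p_under = 100 / under_odds  # 62.5%
p_over = 100 / over_odds    # ≈ 37.6%
print(p_under + p_over)     # ≈ 100, as the post argues
```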
We're only talking about one line here, i.e. how the probability of scoring under 2.5 goals at minute 1 changes as the prematch goal expectation varies. I have to do this for 5 mins, 10 mins, up to 85 mins. Then I've got to do it for over 2.5 goals. Then I've got to repeat all that to figure the over and under 1.5 goal markets, and the over and under 3.5, 4.5, 5.5 and 6.5 markets. So a huge amount of work. But I can do it all now. The only problem I'm facing is that the probabilities when I get to 65 mins are insufficiently accurate. But I can work my way around that. 
15th April 2006, 04:37 AM  #35 
Banned
Join Date: Feb 2004
Posts: 7,675

Here are the figures for getting less than 2.5 goals with a prematch expectation of 2.2 goals:

62.5% FT
87% HT

(These figures of course assume no goal is scored in the first half. Otherwise the probabilities would be very different!)

So if no goal is scored come half time, the probability that the total number of goals scored will be less than 2.5 goals will increase from 62.5% at the beginning of the match to 87% at half time. The figure I get from applying the Poisson distribution to 1.1 for the whole game is 80.1%. So that's 7% less.

This might seem a bit strange, because if the prematch expectation is 2.2 goals, and the chance of a goal being scored doesn't vary as the match progresses, then the figure at half time should also be 80.1%. I suppose this is just the difficulty of trying to mathematically model human behaviour. At the beginning of the match, for example, maybe the 2 teams tend to test each other out and less sustained effort is made to score. And maybe near the end more attacking efforts are made. In which case the chance of scoring varies as a match progresses. Anyone have figures on this?
But hang on a sec. He's a complete stranger. You don't trust him to pay up should you win, and he doesn't trust you. So when you strike up the bet, each of you gives the £10 you could lose to the barman. He looks after the money until the match finishes, then gives the person who wins his original £10 back plus his winnings. But the winnings will be slightly less than £10, because the barman wants to be paid for his bother. In fact he normally takes 5% of the winnings. So the person who wins only gets a total of £19.50 back (his original £10 plus £9.50 winnings).

This is exactly how betfair works, except you can bet against anyone in the world. But in the total goals market at betfair the commission paid is only 1% rather than 5%. That makes it much more likely you can be up over the long haul. So I'm not competing against professionals generally. Indeed, I'm getting the prematch goal expectations from the bookies themselves (the spread firms).

Shall address the rest of your post later. 
15th April 2006, 05:08 AM  #36 
Banned
Join Date: Feb 2004
Posts: 7,675

Yes, the probabilities can't decrease uniformly. I wasn't thinking there.

I had in mind that the average number of goals in any remaining period should decrease uniformly. So with a prematch expectation of 2.2 goals, the half-time expectation with no goals scored so far would be half of that. But if I enter 1.1 and do a Poisson distribution, the figures don't tally. Have to think about this. 
16th April 2006, 05:32 AM  #37 
Banned
Join Date: Feb 2004
Posts: 7,675

How do you link cells both ways? In other words, so that changing the value in cell A automatically changes the value in cell B to the same value, and changing the value in cell B also changes the value in cell A to the same value?

Another question: is it possible to keep changing a value in a cell so that after every 5-minute interval after a specific time that I can specify, the cell will display the value of a succession of differing cells? 
16th April 2006, 08:09 AM  #38 
TeaTime toad
Join Date: Mar 2005
Posts: 15,516

I don't think you can; that would be a circular reference, because (for instance) if b=a=b you will never get an answer unless you define either "a" or "b".

I don't think I'm explaining myself too clearly here. Right: if Excel needs the value of A to determine the value of B, but needs the value of B to determine the value of A, it cannot define either, as the calculation will keep looping back on itself.

ETA: if you can be clear about exactly why you want a=b=a=b etc., maybe I can find a way around it. 
16th April 2006, 08:18 AM  #39 
Scholar
Join Date: May 2003
Posts: 61

Let me tell you a bit about the Poisson distribution, some of which you probably already know. This will help me to explain what went wrong with your reasoning.

The Poisson distribution is a probability distribution over the set of natural numbers, which means it assigns to each natural number a real number intended to represent the probability of that number being chosen. The Poisson distribution is the appropriate model whenever you're trying to determine the probability of a given number of "events" in a given interval, subject to two conditions: the events occur independently of one another, and they occur at a constant average rate throughout the interval.

"The Poisson distribution" is a bit of a misnomer; it's actually a family of distributions. To specify a particular Poisson distribution, you just need to specify a single parameter, which is usually denoted with the Greek letter lambda. Conveniently, this parameter turns out to be the average of the distribution, so if the average is known, you have your distribution. If you want to get Excel to calculate a Poisson distribution without using a built-in formula, use this equation:

P(n goals) = exp(-lambda) * lambda^n / n!

At half-time, assuming there's no bias for scoring in one half or the other (maybe not a good assumption, but you're the one with the tables, so you can check), the current expectation is indeed a Poisson distribution with half the parameter of the initial one. But (important point) we can't get the numbers for this new distribution by rescaling the numbers from the old one to half. This is because there's more to the distribution than just the average. Rescaling has the effect of multiplying the average by half, just as you want; unfortunately, it only has the effect of multiplying the standard deviation by 1/sqrt(2). (Trust me on this one.) So to get the correct result, you actually need to recalculate the Poisson distribution from scratch using the new parameter.

Well, I was bored, so I made the spreadsheet in Excel. PM me if you'd like it emailed. Take a look at the formulas; I left in a column where lambda is calculated. You should be able to customize it to your liking. 
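The Poisson formula above turns into a tiny script, and as a sanity check it recovers roughly the minute-1 figure for a 2.2-goal expectation. A sketch (Excel's POISSON(n, lambda, FALSE) would give the same per-n values):

```python
import math

# Poisson pmf as given in the post: P(n) = exp(-lambda) * lambda^n / n!
def poisson_pmf(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

# P(fewer than 2.5 goals) = P(0) + P(1) + P(2) for a given expectation.
def p_under_2p5(lam):
    return sum(poisson_pmf(n, lam) for n in range(3))

print(p_under_2p5(2.2))  # ≈ 0.62, close to the table's 62.5% for a 2.2 expectation
```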
16th April 2006, 09:13 AM  #40 
Penultimate Amazing
Join Date: Feb 2004
Location: Puget Sound
Posts: 15,156

The Goal Seek feature might be applicable, no pun intended.

__________________
To survive election season on a skeptics forum, one must understand HymietheRobot.
My authority is total  Trump 
