The website Influence Explorer has a lot of easily accessible data about the contributions of companies and prominent people to lawmakers. As a resource for civics research it’s really nice, but the time series data also makes it a useful resource for math: algebra and pre-calculus in particular.
Wiggle Matching: Sorting out the Global Warming Curve
To figure out if the climate is actually warming, we need to remove from the global temperature curve all the wiggles caused by other things, like volcanic eruptions and El Niño/La Niña events. The resulting trend is quite striking.
I’m teaching pre-Calculus using a graphical approach, and my students’ latest project is to model the trends in the rising carbon dioxide record in a similar way. They’re matching curves (exponential, parabolic, sinusoidal) to the data and subtracting them till they get down to the background noise.
Regression with Gnumeric
Finally, I’ve found a spreadsheet application (Gnumeric) with a reliable Solver for doing regressions. And it’s free. The only tricky part is that there’s no native port for Macs; you have to use a command line package manager to install it (I used Fink).
Gnumeric, however, seems to be an excellent tool for data analysis.
Analyzing the 20th Century Carbon Dioxide Rise: A pre-calculus assignment
The carbon dioxide concentration record from Mauna Loa in Hawaii is an excellent data set to work with in high-school mathematics classes, for two key reasons.
The first has to do with the spark-the-imagination excitement that comes from being able to work with a live, real, scientific record (updated every month) that is so easy to grab (from Scripps), and is extremely relevant given all the issues we’re dealing with regarding global climate change.
The second is that the data is very clearly the sum of two different types of functions. The exponential growth of CO2 concentration in the atmosphere over the last 60 years dominates, but does not swamp, the annual sinusoidal variability as local plants respond to the seasons.
Assignment
So here’s the assignment using the dataset (mona-loa-2012.xls or plain text mona-loa-2012.csv):
1. Identify the exponential growth function:
Add an exponential trendline in a spreadsheet program, or do a manual regression. If you use the regression (which I’ve found gives the best match) your equation should have the form:

y = a·e^(b(x − c)) + d
while the built-in exponential trendline will usually give something simpler like:

y = a·e^(bx)
2. Subtract the exponential function.
Put the exponential function (model) into your spreadsheet program and subtract it from the data set. The result should be just the annual sinusoidal function.
If you look carefully you might also see what looks like a longer-wavelength periodicity overlaid on top of the annual cycle. You can attempt to extract it as well if you wish.
3. Decipher the annual sinusoidal function
Try to match the stripped dataset with a sinusoidal function of the form:

y = a·sin(b(x − c)) + d
A good place to start when finding the best-fit coefficients is by recognizing that:
- a = amplitude;
- b = angular frequency (2π divided by the wavelength, or period);
- c = phase (shifts the curve left or right); and
- d = vertical offset (sets the baseline of the curve).
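For teachers (or students) who want to check their fits outside of a spreadsheet, here’s a minimal sketch of all three steps in Python using scipy’s curve_fit. The CSV layout assumed here (decimal year in the first column, CO2 concentration in the second, one header row) is hypothetical; adjust it to match the actual file.

```python
# A sketch of the assignment in Python; the column layout is an assumption.
import numpy as np
from scipy.optimize import curve_fit

data = np.genfromtxt("mona-loa-2012.csv", delimiter=",", skip_header=1)
x, y = data[:, 0], data[:, 1]  # decimal year, CO2 concentration (ppm)

# Step 1: fit the long-term growth, y = a*exp(b*(x - c)) + d.
# c is held fixed at 1959; a free c would be redundant with a.
def growth(x, a, b, d):
    return a * np.exp(b * (x - 1959)) + d

(ga, gb, gd), _ = curve_fit(growth, x, y, p0=(60, 0.02, 260))

# Step 2: subtract the exponential model, leaving the annual wiggle.
residual = y - growth(x, ga, gb, gd)

# Step 3: fit the annual cycle, y = a*sin(b*(x - c)) + d.
def annual(x, a, b, c, d):
    return a * np.sin(b * (x - c)) + d

(sa, sb, sc, sd), _ = curve_fit(annual, x, residual, p0=(3, 2 * np.pi, 0, 0))

# The combined model can now predict any month, e.g. mid-2050:
print(growth(2050.5, ga, gb, gd) + annual(2050.5, sa, sb, sc, sd))
```

Holding c fixed in the exponential fit matters because a·e^(b(x−c)) = (a·e^(−bc))·e^(bx), so a and c can trade off against each other indefinitely and confuse the solver.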
Wrap up
Now you have a model for carbon dioxide concentration, so you should be able to predict, for example, what the concentration will be for each month in the years 2020, 2050 and 2100 if the trends continue as they have for the last 60 years. This is the first step in predicting annual temperatures based on increasing CO2 concentrations.
Seeing Functions at the City Museum
I asked my students to take pictures of the curves they found while on our field trip to the scrap metal playground that is the City Museum. The plan is to see if we can determine what functions best fit the curves. To do so, we need to transfer the curves from the images to a co-ordinate system. Since I’m primarily interested in what type of functions might best fit the data, the scale of the co-ordinates does not matter that much.
Feet, inches, meters, centimeters, pixels, or any other units can be used. In fact, I use a purely arbitrary set of coordinates in the image above. All I require is that the grid be evenly spaced (although the vertical and horizontal spacing don’t have to be the same, it’s more straightforward if they are).
Now we take a set of points that lie on our shape and try to match them to some sort of curve using a spreadsheet, and, if we’re able, least squares regression.
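As an illustration of that last step, here’s a minimal Python sketch; the points are invented stand-ins for coordinates read off one of the photographed arches, not actual measurements.

```python
# Least-squares fit of a parabola to points read off an arbitrary grid.
import numpy as np

# Hypothetical (x, y) points traced from a photo of an arch;
# the units are arbitrary, only the even grid spacing matters.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 2.6, 2.1, 0.2])

a, b, c = np.polyfit(x, y, 2)  # coefficients of y = a*x^2 + b*x + c
print(f"y = {a:.3f}x^2 + {b:.3f}x + {c:.3f}")
```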
There were lots of shapes to choose from.
Inverse Relationships
Inverse relationships pop up everywhere. They’re pretty common in physics (see Boyle’s Law for example: P ∝ 1/V), but there you sort of expect them. You don’t quite expect to see them in the number of views of my blog posts, as shown in the Popular Posts section of the column to the right.
Table 1: Views of the posts on the Montessori Muddle in the previous month as of October 16th, 2012.
You can plot these data to show the relationship.
And if you think about it, it sort of makes sense that this relationship should be inverse. After all, as you get to lower-ranked (less visited) posts, the number of views should asymptotically approach zero.
Questions
So, given this data, can my pre-Calculus students find the equation for the best-fit inverse function? That way I could estimate how many hits my 20th or 100th ranked post gets per month.
Can my Calculus students use the function they come up with to estimate the total number of hits on all of my posts over the last month? Or even the top 20 most popular posts?
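Here’s a minimal sketch of how that fit might go in Python; the view counts are placeholders, since the real numbers live in the table above.

```python
# Fit an inverse function, y = a/x, to rank-versus-views data.
import numpy as np
from scipy.optimize import curve_fit

rank = np.arange(1, 11)
views = np.array([1200, 640, 410, 300, 250, 200, 180, 160, 140, 130])  # placeholders

def inverse(x, a):
    return a / x

(a,), _ = curve_fit(inverse, rank, views)

print(f"Estimated views for the 20th-ranked post: {a / 20:.0f}")
# The Calculus question: total views is the sum of the model over all
# ranks (or, approximately, the integral of a/x, which grows like a*ln(n)).
print(f"Estimated total for the top 20 posts: {sum(a / n for n in range(1, 21)):.0f}")
```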
Curve Matching: Radioactive Decay and the Distance Between the Earth and the Sun
According to theory, radioactive elements will always decay at a constant rate, with a little variability due to randomness. What you should not expect to find is that the rate of decay changes with the distance of the Earth from the Sun.
In pre-Calculus, we’re figuring out how to match curves to data. The scientists in this study do something similar, trying to see what types of sinusoidal curves will match the data, then seeing what natural phenomena have the same period (the time it takes for one cycle).
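A rough sketch of that kind of wiggle matching, with synthetic data standing in for the decay-rate measurements (the one-year period is built in here just for illustration):

```python
# Fit a sinusoid to synthetic decay-rate residuals and recover its period.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 200)  # time in years
rate = 0.002 * np.sin(2 * np.pi * t) + 0.0005 * np.random.randn(t.size)

def wiggle(t, a, b, c):
    return a * np.sin(b * (t - c))

(a, b, c), _ = curve_fit(wiggle, t, rate, p0=(0.002, 2 * np.pi, 0))
print(f"Best-fit period: {2 * np.pi / b:.2f} years")  # ~1 year: an annual cycle
```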
Least Squares Regression with Excel
Here I’ll go over how to do Least Squares Regression, as simply as possible, using Excel and its Solver*. Least Squares Regression can be used to match pretty much any type of function to any type of data. Most spreadsheet programs, like Excel, will do some curve matching for you when you add trendlines to graphs, but for more sophisticated work (in pre-Calculus and beyond, for example) you need a more general approach.
We’ll start with a data set that we want to match. The atmospheric CO2 data from Mauna Loa is a convenient dataset. It’s also an important data set to model because matching a function to the data will allow us to predict the change in CO2 over the next 100 years, and those predictions are the ones the IPCC uses in their estimates of the impact of global climate change around the world. Billions, if not trillions of dollars depend on those predictions (and how governments decide to respond). For ease of demonstration, I’ve picked a few of the annual average CO2 concentration measurements at random to work with:
Table 1: CO2 Data
Year (x) | CO2 concentration (ppm) (y) |
---|---|
1959 | 315.97 |
1964 | 319.62 |
1969 | 324.62 |
1981 | 340.10 |
1985 | 346.04 |
1996 | 362.59 |
2003 | 375.77 |
Now, looking at the data, we can see there is some sort of trend. Our first decision is about how to model it.
Straight Line Model
The simplest approach, and the one we’ll try first, is to fit a straight line through the data. (The final spreadsheet is here.)
The equation of a straight line is:

y = mx + b
In this case, as we can see from the graph (Figure 1), the y axis is CO2 concentration, and the x axis is the year. m and b are our two unknown constants that set the slope of the line (m) and move it up and down (b). If you need to get a better feel for what this means, try changing the slope and intercept in the Straight Line Grapher (I used this in class to demonstrate).
To match our straight line model to the data we need to determine the values of m and b that give the best fit.
So we create a spreadsheet with the data and in the adjacent column set up the straight line function by setting two cells to the values of the constants (m and b) and using those values to calculate the modeled CO2 concentration.
You’ll notice that I have values of m = 1 and b = −1600. These are just my initial estimates. The initial values are not crucial, as you’ll see; they’re just there for me to check that my formula is entered correctly.
Once I have the correct formulas in, I can play around with these values until my line matches the data. However, this is where Solver comes in to save a lot of time.
Finding a match
First we need a quantitative way of telling if we have a good match or not. We can start by taking the difference between each real data point and the modeled value. We’ll call this the error.
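For example, with the initial estimates above (m = 1 and b = −1600), the model gives 1(1959) − 1600 = 359 ppm for 1959, while the measured value is 315.97 ppm, so the error is 43.03 ppm.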
Now we could get a single value for the total error by adding up all the individual error values, or taking the average. However, as one of my students pointed out, we could end up with a case where the modeled line crosses through the data, and the positive differences of the data points above the line would cancel out the negative differences of the data points below the line. His solution was to take the absolute value of the differences (errors) instead, which would actually work just as well with the approach we’re taking here.
Instead of using the absolute value, however, we’ll square the errors. This achieves the same effect we need, because the squares of both negative and positive numbers are positive. This approach is where the “squares” in Least Squares Regression comes from. The “Least” part comes from the fact that we’re now going to try adjusting our coefficients until we get the average of the squares of the errors to be as small as possible.
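In symbols, for n data points, the quantity we’ll be minimizing is the mean of the squared errors: E = (1/n) Σ (yᵢ − (m·xᵢ + b))².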
Now we calculate the average of the squared errors (the sum would work just as well) using the spreadsheet’s “AVERAGE” function.
Now we can adjust m and b and not just see how the line compares to the data points on a graph, but know that we have the best fit when we’ve minimized the average squared error.
Solver
Instead of changing m and b by hand, we can use Excel’s Solver to minimize the average error for us. The matching of a straight line can be done using algebra (e.g. here and here) but that approach won’t help us when we get to more complex functions.
We select the average error cell (E16) and tell Solver to minimize its value by changing the values of m and b.
When we hit solve, Solver should converge on a solution. The way the problem is set up (using the square of the error, for example) makes this a non-linear problem for the solver. As of this writing, Excel is the only spreadsheet program I know of that has a built-in, non-linear solver.
You’ll notice that Solver’s solution gives:
- m = 1.360
- b = -2351.9
So now we have the equation for the best-fit line (our model):

y = 1.360x − 2351.9
Prediction
Using this model we can predict the atmospheric CO2 concentration for the year 2050 by setting x = 2050 in the modeled equation, which gives 436.5 ppm.
The final spreadsheet I used to do these calculations can be found here.
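For anyone without Excel, the same minimization is easy to reproduce in Python; this sketch mirrors the Solver setup (same data, same initial guesses) rather than the spreadsheet itself:

```python
# Adjust m and b to minimize the average squared error,
# just as Solver does with the average-error cell.
import numpy as np
from scipy.optimize import minimize

year = np.array([1959, 1964, 1969, 1981, 1985, 1996, 2003])
co2 = np.array([315.97, 319.62, 324.62, 340.10, 346.04, 362.59, 375.77])

def avg_squared_error(params):
    m, b = params
    return np.mean((co2 - (m * year + b)) ** 2)

result = minimize(avg_squared_error, x0=[1, -1600])
m, b = result.x
print(f"m = {m:.3f}, b = {b:.1f}")  # should land close to Solver's values
print(f"2050 prediction: {m * 2050 + b:.1f} ppm")
```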
Parabolic Model
As good as the straight line model seems to be, it does not account for the slight upward curve the data seems to have (see Figure 1). So instead of a straight line, perhaps we could try a parabolic function. Just as before, we’ll need to figure out the coefficients for the parabolic function. In this case the general equation we’ll use is:

y = ax² + bx + c
and we’ll need to find the coefficients a, b and c. To see how changing these coefficients changes the curve, you can play around with the interactive parabola model.
We set up our spreadsheet in the same way as for the straight line, but with our new coefficients.
Note that the only column that needs to change from the straight-line spreadsheet is the “Model” column (column C), to add the coefficients and to change the formula to that of a parabola.
Now we can use Solver to minimize the average of the squares of the errors, just as we did before, only having it change three coefficients instead of two. Solver gives:
- a = 0.00060091
- b = -1.018
- c = 0.9933
and a final equation:

y = 0.00060091x² − 1.018x + 0.9933
Prediction
The parabolic model predicts for the year 2050 (x = 2050) that the CO2 concentration will be 439.4 ppm.
The Excel spreadsheet for the parabolic model is here.
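The parabolic fit, too, can be checked in Python; np.polyfit does the least-squares work directly (the coefficients may differ slightly from Solver’s, which stops once the improvement gets small):

```python
# Direct least-squares parabola through the same Table 1 data;
# polyfit returns the coefficients of y = a*x^2 + b*x + c.
import numpy as np

year = np.array([1959, 1964, 1969, 1981, 1985, 1996, 2003])
co2 = np.array([315.97, 319.62, 324.62, 340.10, 346.04, 362.59, 375.77])

a, b, c = np.polyfit(year, co2, 2)
print(f"2050 prediction: {np.polyval([a, b, c], 2050):.1f} ppm")
```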
Which model is better?
The easiest way to compare the two models is to see which did better at minimizing the squared errors.
Table 2: Comparison of models.
Model | Average Squared Error |
---|---|
Straight Line | 4.94 |
Parabola | 4.47 |
So the parabolic model does better, but not by much.
Conclusions
With this approach, you can use any type of function — exponential, sinusoidal etc. — or combination of functions to match any data, and be able to compare among the models. A better measure of how well the models match the data is the regression coefficient or coefficient of determination, but I’ll save those for another post.
* UPDATE (Dec 31, 2012): Gnumeric’s Solver works.
* You’ll need to use Microsoft’s Excel, or better yet Gnumeric for this because, as of this writing, none of the other common spreadsheet options — OpenOffice, Google Docs, and Mac’s Calc — have a non-linear solver built in — and even some of the newer versions of Excel seem buggy.