When it comes to particle physics … [m]easuring something once is meaningless because of the high degree of uncertainty involved in such exotic, small systems. Scientists rely on taking measurements over and over again — enough times to dismiss the chance of a fluke.
New research, out of the Large Hadron Collider in Switzerland, shows a 0.8% difference in the way matter and antimatter particles behave. This small difference could go a long way in explaining why the universe is made up mostly of matter today, even though in the beginning there were about equal amounts of matter and antimatter. It would mean that the current, best theory describing particle physics, the Standard Model, needs some significant tweaking.
0.8% is small, but significant. How confident are the physicists that their measurements are accurate? Well, the more measurements you take the more confident you can be in your average result, though you can never be 100% certain. The LHC scientists did enough measurements that they could calculate, statistically, that there is only a 0.05% chance that their measurement is wrong.
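The payoff from repetition can be made concrete with the standard error of the mean, which shrinks with the square root of the number of measurements. Here's a quick sketch in Python; the "true value" of 0.8 and the noise level are invented for illustration and have nothing to do with the actual LHC analysis:

```python
import math
import random

random.seed(42)

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Pretend each "measurement" of a quantity whose true value is 0.8
# comes back with Gaussian noise of width 0.5 (both numbers invented).
def measure():
    return random.gauss(0.8, 0.5)

results = {}
for n in (10, 100, 10000):
    results[n] = standard_error([measure() for _ in range(n)])
    print(f"n = {n:6d}: standard error = {results[n]:.4f}")
```

Each factor of 100 in the number of measurements buys you a factor of 10 in confidence, which is why particle physicists take so many.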
There was a neat little conference today, organized by LEGO’s Education division. I’ve been trying to figure out a way to include robotics in my math and science classes, but since I haven’t had the time to delve into it, I was wondering if the LEGO Robotics sets would be an easy way to get started. It turns out that they have a lot of lesson plans and curricula available that are geared for kids all the way from elementary to high school, so I’m seriously considering giving it a try.
Pedagogically, there are a lot of good reasons to integrate robotics into our classes, particularly as the cornerstone of a project-based-learning curriculum.
The act of building robots increases engagement in learning. Just like assembling Ikea furniture makes people like it better, when students build something the accomplishment means more to them.
Working on projects builds grit, because no good project can succeed without some obstacles that need to be overcome. Success comes through perseverance. Good projects build character.
The process of building robots provides a sequence of potential “figure it out” moments because of all the steps that go into it, especially when students get ambitious about their projects. And students learn a whole lot more when they discover things on their own.
Projects don’t instill the same stress to perform as do tests. Students learn that learning is a process where you use your strengths and supplement your weaknesses to achieve a goal. They learn that their worth is more than the value of an exam.
Projects promote creativity, not kill it like a lot of traditional education.
In terms of the curriculum, Physics and Math applications are the most obvious: think about combining electronics and simple machines, and moving robots around the room for geometry. A number of the presenters, Matthew Collier and Don Mugan for example, advocate for using it across the curriculum. Mugan calls it transdisciplinary education, where the engineering project is central to all the subjects (in English class students do research and write reports about their projects).
I’ve always favored this type of learning (Somewhat in the Air is a great example), but one has to watch out to make sure that you’re covering all the required topics for a particular subject. Going into one thing in depth usually means you have to sacrifice, for the moment at least, some width. The more you can get free of the strictures of traditional schooling the better, because then you don’t have to make sure you hit all the topics on the physics curriculum in the seemingly short year that you officially teach physics.
The key rules about implementation that I gleaned from presentations and conversations with teachers who use LEGO robotics are:
Journaling is essential. Students are going to learn a lot more if they have to plan out what they want to do, and how to do it, in a journal instead of just using trial-and-error playing with the robots.
Promote peer-teaching. I advocate peer teaching every chance I get; teaching is the best way to learn something yourself.
2 kids per kit. I heard this over and over again. There are ways of making larger groups work, but none are ideal.
A Plan of Action
So I’m going to try to start with the MINDSTORM educational kit, but this requires getting the standard programming software separately. One alternative would be to go with the retail kit, which is the same price and has the software (although I don’t know if anything else is missing).
I think, however, I’ll try to get the more advanced LabVIEW software, which seems to be used mostly for the high school projects built with the more sophisticated TETRIX parts (though with the same microcontroller brick as the MINDSTORM sets). LabVIEW might be a little trickier to learn, but it’s based on the program engineers use on the job. Middle and high school students should be able to handle it. But we’ll see.
Since LabVIEW is more powerful, it should ease the transition when I do upgrade to the TETRIX robots.
The one potential problem that came up, that actually affects both software packages, is that they work great for linear learners, but students with a more random access memory will likely have a harder time.
At any rate, now I have to find a MINDSTORM set to play with. Since I’m cheap I’ll start by asking around the school. Rumor has it that there was once a robotics club, so maybe someone has a set sitting around that I can borrow. We’ll see.
Sitting in a car that’s going around a sharp bend, it’s easy to feel like there’s a force pushing you against the side of the car. It’s called the centrifugal force, and while it’s real to you as you rotate with the car, if you look at things from the outside (from a frame of reference that’s not rotating) there’s really no force pushing you outward. The only force is the one keeping you in the car: the force of the side of the car on you. This is the centripetal force. Given all the potential for confusion, I created this little VPython model that mimics a sling.
Centripetal Force
In the model, you launch a ball and it goes off in a straight line. That’s inertia. An object will move in a straight line unless there’s some other force acting on it. When the ball hits the string, it catches and the string starts to pull on the ball, taking it away from its straight line trajectory. The force that pulls the ball away from its original straight path is the centripetal force.
Conservation of Angular Momentum
The ball rotating on the sling has an angular momentum (L) that’s equal to the velocity (v) times its mass (m) times its radius (r) away from the center.
L = mvr (angular momentum)
Now, angular momentum is conserved, which means that if you shorten the string, reducing the radius, something else must increase to compensate. Since the mass can’t change, the velocity has to, and the ball speeds up.
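The compensation is easy to check with numbers. A minimal sketch (the mass, speed, and radii are made up for illustration):

```python
def speed_after_radius_change(m, v1, r1, r2):
    """Conservation of angular momentum: m*v1*r1 = m*v2*r2, so v2 = v1*r1/r2."""
    return v1 * r1 / r2

# A 0.5 kg ball moving at 2 m/s on a 1 m string, reeled in to 0.5 m:
v2 = speed_after_radius_change(0.5, 2.0, 1.0, 0.5)
print(v2)  # halving the radius doubles the speed: 4.0 m/s
```

Note that the mass cancels out entirely; only the ratio of the radii matters.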
I’ve put in a little ball at the end of the string that you can pull on to shorten the radius.
Tangential Velocity
Once the ball is attached to the string, the centripetal force will keep it moving in a circle. If you release the ball then it will fly off in a straight line in whatever direction it was going when you released it. With no forces acting on the ball, inertia says the ball will move in a straight line.
To better illustrate the ball’s motion off a tangent, I put in a target to aim for. It’s off the screen for the normal model view, but if you rotate the scene to look due north you’ll see it.
A series of still photographs of a projectile (a soccer ball) in motion were used to determine the equation for the height of the ball (h(t) = -4.9t² + 14.2t + 1.25), the initial velocity of the ball (14.2 m/s), the maximum height of the ball (11.5 m), and the time between each photograph (0.41 s). The problem was solved numerically using MS Excel’s Solver function. There are much easier ways of doing this, which we did not do.
Introduction
One of the physics lab assignments I gave my students was to see if they could use a camera to capture a sequence of images of a projectile, plot the elevation of the projectile from the photographs, determine the constants in the parabolic equation for its height, and, in so doing, determine the velocity at which it was launched.
I offered my old digital Pentax SLR, which can take up to seven pictures in quick sequence and can be set to fully manual. A digital video camera with a detailed timestamp would have been ideal, but we did not have one available at the time.
Now the easy way of getting the velocity data would be to estimate the heights (h) of the ball from the images using some sort of known reference (in this case the whiteboard), and determine the time between each photograph (Δt) by photographing a stopwatch using the same shutter-speed settings. After all, the average velocity of the ball between two images would be:

v = (h2 − h1) / Δt
The reference whiteboard is four feet tall (1.22 m) in real life, but 51 pixels tall in the image. Using this ratio (i.e. 1.22 m = 51 px) we can convert the heights of the ball from pixels to meters:
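That ratio is just a multiplication, but it’s worth scripting when you have many photographs to process. A sketch of the conversion, with hypothetical pixel measurements standing in for the students’ actual data:

```python
REF_METERS = 1.22   # whiteboard height in real life (4 ft)
REF_PIXELS = 51     # whiteboard height as measured in the image

def pixels_to_meters(px):
    """Convert a height measured in image pixels to meters, using the
    reference whiteboard as the scale."""
    return px * REF_METERS / REF_PIXELS

# Hypothetical ball heights read off the photographs, in pixels:
for px in (210, 350, 430):
    print(f"{px} px -> {pixels_to_meters(px):.2f} m")
```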
Unfortunately, I think my students forgot to take the pictures of the stopwatch to get Δt, the time between each photograph. Since the lab reports are due on Monday, and it’s the weekend now, I’m curious to see what they come up with.
However, I was wondering if they could use just the elevation data to back out Δt, so I gave it a try myself. Even the easiest way of solving this problem is not trivial; in fact, I ended up resorting to Excel’s iterative Solver to find the answers. While this procedure probably goes a little beyond what I expect from the typical high school physics student, more advanced students who are taking calculus might benefit.
Procedure
We took the reference whiteboard (1.22 m tall), a soccer ball, and the camera outside. The whiteboard was leaned vertically against the post of the soccer goal. The ball was thrown vertically by a student standing next to the whiteboard (see Figure 1) while pictures were taken. The camera’s shutter speed was 1/250th of a second. The distance from the camera to the person throwing the ball (and to the whiteboard) was not measured.
The procedure was repeated several times, but only one trial was used in this analysis.
The images were loaded onto a computer, and the program GIMP was used to determine the distance, in pixels, from the ground to the projectile. The size of the reference whiteboard, in pixels, was used to calculate the height of the soccer ball in meters.
The elevations measured off the photographs were then used to calculate the release velocity, time between snapshots, and maximum height of the ball.
The Equation for Elevation
I started with the fact that once the ball is released, the only force acting on it is the force of gravity. Since the mass of the ball does not change, we only have to consider the acceleration due to gravity (-9.8 m/s²). I also neglected air resistance to keep things simple.
Finding the Velocity Equation
Start with the fact that acceleration is the rate of change of velocity with time. You can write it in differential form:

a = dv/dt

so we integrate with respect to time to get the equation for velocity as a function of time:

v(t) = at + c

where c is an unknown constant. What we do know, though, is that at the beginning, when the ball is just launched, time is zero (t = 0), so c becomes the initial velocity (v0) at which the ball is thrown:

at t = 0, v(0) = v0, so c = v0

So our velocity equation becomes:

v(t) = at + v0
Finding the height equation
Now, since we know that velocity is the rate of change of distance (in this case height) with time:

v = dh/dt

we integrate again to find the height equation:

h(t) = ½at² + v0t + c

Similar to what we did with the velocity equation, to find the new constant c we consider what happens at the start, when the ball is launched: t = 0 and h(0) = h0, so:

c = h0

The constant is equal to the initial height of the ball, the height at which it’s thrown. So we end up with the final equation:

h(t) = ½at² + v0t + h0
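One way to sanity-check the integration is numerically: stepping the velocity and height forward in tiny time increments (Euler integration) should land on the same answer as the closed-form height equation. A small sketch, using the constants reported in the Results section:

```python
# Sanity check: Euler integration of a = -9.8 m/s^2 should reproduce
# h(t) = 0.5*a*t^2 + v0*t + h0 (constants from the Results section).
a, v0, h0 = -9.8, 14.2, 1.25

def h_closed(t):
    """Closed-form height equation derived above."""
    return 0.5 * a * t**2 + v0 * t + h0

# Step the motion forward in small increments: v += a*dt, then h += v*dt
dt, t, v, h = 1e-4, 0.0, v0, h0
while t < 1.45:               # integrate up to roughly the peak of the throw
    v += a * dt
    h += v * dt
    t += dt

print(f"closed form: {h_closed(t):.3f} m, integrated: {h:.3f} m")
```

The two values agree to within the step size, which is a good sign the calculus above was done correctly.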
Results
Solving all the unknowns
At this point, although we have an equation for the height of the ball, we don’t know the initial velocity (v0), nor do we know the initial height of the ball when it’s released (h0). And we still don’t know the time when the ball is at each position.
With that many unknowns we’d need the same number of independent equations to be able to solve for them all. It may be possible, but instead of analytically solving the equations I opted to take a numerical approach, and use Excel’s Solver function.
I started by setting up the equations to calculate the height of the ball at six different times to correspond with our six height measurements. It was necessary therefore to create a set of variables:
Time when we started taking pictures (t1): Since we don’t know how long after we threw the ball we started taking pictures, I made this a variable called t1.
The time between each picture (dt): I made the assumption that the time between each picture would be constant. The shutter speed was constant (1/250th of a second) so there is no obvious reason why the time should be different.
Initial velocity (v0): The initial upward speed at which the ball was thrown. Obviously, the faster the initial speed the higher the ball goes, so this is a fairly important parameter.
Initial height (h0): We also don’t precisely know how high the ball was when it was released, so this also needs to be a variable.
By defining the time between each picture as dt, we can write the time that each picture was taken in terms of the time of the first picture (t1) and dt. After all, the second picture would have been taken dt seconds after the first, for a total time of:

t2 = t1 + dt

and similarly for all the pictures:

tn = t1 + (n − 1)·dt
Now I set up an Excel spreadsheet and gave all the unknown variables an initial value of 1:
Now I just had to run Solver and tell it that I wanted the Total Residual, which sums the differences between the heights given by the h(t) equation and the actual, measured values, to be as close to zero as possible. A perfect fit of the equation to the data would have a total residual of zero, but that’s not possible when you’re dealing with real data.
Even so, I had to goose Solver a bit for it to produce reasonable numbers. I put in a few constraints:
dt >= 0: We could not have a negative time between pictures.
h0 <= 1.25: 1.25 meters seemed reasonable for the height at which the ball was released.
t1 <= 1: It also seemed reasonable that the time when the first picture was taken was less than one second after the ball was thrown.
I ran the Solver a few times, and had to reset dt to 0.5 at one point when it had become zero, but the final result looked remarkably good: the total difference between the modeled line and the actual data was only 0.113 meters.
So we found that:
Initial velocity: v0 = 14.2 m/s
Height at release: h0 = 1.25 m
Time between pictures: dt = 0.41 s
Time when the first picture was taken: t1 = 0.44 s
Which makes the height equation:

h(t) = -4.9t² + 14.2t + 1.25

Using these constants in the height equation, we could see how good a fit it was to the data:
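The same fit can be reproduced outside of Excel. The sketch below uses scipy’s least_squares in place of Solver, with synthetic “measurements” generated from the fitted equation (I don’t have the students’ pixel data at hand, so these stand-in heights are an assumption). One thing the numerical experiment makes clear is that the photographs alone pin down dt and the shape of the parabola, but t1, v0 and h0 can trade off against each other by shifting the time origin, which is why Solver needed the constraints listed above; the bounds below play the same role.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" heights generated from the final height equation,
# using the fitted values above (t1=0.44 s, dt=0.41 s, v0=14.2 m/s, h0=1.25 m).
i = np.arange(6)                                   # six photographs
t_true = 0.44 + i * 0.41
heights = -4.9 * t_true**2 + 14.2 * t_true + 1.25  # meters

def residuals(p):
    """Difference between modeled and 'measured' heights for parameters p."""
    t1, dt, v0, h0 = p
    t = t1 + i * dt
    return -4.9 * t**2 + v0 * t + h0 - heights

# Bounds stand in for Solver's constraints: dt >= 0, t1 <= 1, h0 <= 1.25.
fit = least_squares(residuals, x0=[0.5, 0.5, 10.0, 1.0],
                    bounds=([0, 0, 0, 0], [1.0, 2.0, 30.0, 1.25]))
t1, dt, v0, h0 = fit.x

# dt and the maximum height are well determined even though t1, v0 and h0
# individually are not (they shift together with the time origin).
t_max = v0 / 9.8
h_max = -4.9 * t_max**2 + v0 * t_max + h0
print(f"dt = {dt:.3f} s, maximum height = {h_max:.2f} m")
```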
Maximum Height of the Ball
Finally, the maximum height of the ball can be read off the graph, but it can also be determined from the equation for the height of the ball:

h(t) = -4.9t² + 14.2t + 1.25

We know that the maximum height is reached when the ball stops moving upward and starts to descend. At that point, the vertical velocity of the ball is zero. Since the velocity of the ball is the rate of change of height (v = dh/dt), we can differentiate the height equation to get an equation for velocity:

v(t) = -9.8t + 14.2

using the initial velocity of 14.2 m/s we determined above. When the velocity is zero (v = 0):

0 = -9.8t + 14.2

which can be solved for t to find that the time at which the ball reaches its maximum height is:

t = 14.2 / 9.8 ≈ 1.45 s

Putting this into the height equation:

h(1.45) = -4.9(1.45)² + 14.2(1.45) + 1.25

gives a maximum height of about 11.5 m.
Discussion
I’m quite happy with the way this project turned out. The fit between the modeled heights (h(t)) and the actual heights was very good.
My primary concern going into the project was that the distortion from the camera lens would make this technique impossible, but that appears not to be a significant problem.
Most of this calculation, including the somewhat tricky numerical solution using Solver, could have been avoided if I’d calibrated the camera by simply pointing it at a stopwatch (using the same shutter speed as in the experiment) and measuring the time between snapshots. It will therefore be interesting to see if the actual time between shots (dt) is close to the dt of 0.41 seconds calculated by the model.
Finally, as noted above, a video camera with a timestamp would possibly be a more useful technology for this experiment.
Conclusion
It is possible to analyze the projectile path of an object using a series of snapshots, to determine the initial velocity of the projectile, its release height, and the time between snapshots, if you can assume that the time between snapshots is identical. There are, however, much easier methods of solving this problem.
I did a little exercise at the start of my high-school physics class today that introduced different types of experimental error. We’re starting the second quarter now, and it’s time for their lab reports to include more discussion about potential sources of error, how the students might fix some of them, and what they might mean.
One of the stairwells just outside the physics classroom wraps around nicely, so students could stand on the steps and, using stopwatches, time it as I dropped a tennis ball 5.3 meters, from the top banister to the floor below.
Random and Reading Errors
They had a variety of stopwatches, including a number of phones, at least one wristwatch, and a few of the classroom stopwatches that I had on hand. Some devices could read to one hundredth of a second, while others could only read to tenths of a second. So you can see that there is some error just due to how precisely the measuring device can be read. We’ll call this the reading error. If the best value your stopwatch gives you is to the tenth of a second, then you have a reading error of plus or minus 0.1 seconds (±0.1 s). And you can’t do much about this other than get a better measuring device.
Another source of error is due to random differences that will happen with every experimental trial. Maybe you were just a fraction of a second slower stopping your watch this time compared to the last. Maybe a slight gust of air slowed the ball’s fall when it dropped this time. This type of error is usually just called random error, and can only be reduced by taking more and more measurements.
Our combination of reading and random errors meant that we had quite a wide range of results, from a minimum time of 0.7 seconds to a maximum of 1.2 seconds.
So what was the right answer?
Well, you can calculate the falling time if you know the distance (d) the ball fell (d = 5.3 m) and its acceleration due to gravity (g = 9.8 m/s²), using the equation:

t = √(2d/g)

which gives:

t = √(2 × 5.3 / 9.8) ≈ 1.04 s
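The same calculation in a few lines of Python:

```python
import math

d = 5.3    # drop height, meters
g = 9.8    # gravitational acceleration, m/s^2

# From d = (1/2) * g * t^2, the fall time is t = sqrt(2*d/g)
t = math.sqrt(2 * d / g)
print(f"predicted fall time: {t:.2f} s")
```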
So while some individual measurements were off by over 30%, the average value was off by only 8%, which is a nice illustration of the phenomenon that the more measurements you take, the better your result. In fact, you can plot the improvement in the data by drawing a graph of how the average of the measurements improves with the number of measurements (n) you take.
More measurements reduce the random error, but you reach a point of diminishing returns, when the average just does not improve enough to make it worth the effort of taking more measurements. The graph shows the improvement slowing noticeably after about five measurements. While there are statistical techniques that can help you determine how many samples are enough, you ultimately have to base your decision on how accurate you want to be and how much time and energy you want to spend on the project. Given the large range of values in this example, I would not want to use fewer than six measurements.
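The diminishing returns are easy to simulate. The sketch below invents twenty stopwatch readings (a true time of 1.04 s with random timing error and a 0.1 s reading resolution; the noise level is made up, not measured from the class) and tracks how the running average settles down:

```python
import random

random.seed(1)
TRUE_TIME = 1.04    # calculated fall time, seconds

# Simulate stopwatch readings: true time plus random timing error,
# then rounded to the nearest 0.1 s (the reading error).
readings = [round(random.gauss(TRUE_TIME, 0.15), 1) for _ in range(20)]

# Running average after each additional measurement
running = []
for n in range(1, len(readings) + 1):
    running.append(sum(readings[:n]) / n)

for n in (1, 5, 20):
    print(f"average of first {n:2d} readings: {running[n-1]:.3f} s")
```

The first few measurements move the average a lot; the last dozen barely budge it.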
Systematic Error
But, as you can see from the graph, even with over a dozen measurements, the average measured value remains persistently lower than the calculated value. Why?
This is quite likely due to some systematic error in our experiment – an error you make every time you do the experiment. Systematic errors are the most interesting type of errors because they tell you that something in the way you’ve designed your experiment is faulty.
The most exciting type of systematic error would, in my opinion, be one caused by a fundamental error in your assumptions, because they challenge you to fundamentally reevaluate what you’re doing. The scientists who recently reported seeing particles moving faster than light made their discovery because there was a systematic error in their measurements – an error that may result in the rewriting of the laws of physics.
In our experiment, I calculated the time the tennis ball took to fall using the gravitational acceleration at the surface of the Earth (9.8 m/s²). One important force that I did not consider in the calculation was air resistance. Air resistance would slow down the ball every single time it was dropped. It would be a systematic error. In fact, we could use the error that shows up to actually calculate the force of the air resistance.
However, since air resistance would slow the ball down, it would take longer to hit the floor. Unfortunately, our measurements were shorter than the calculated falling time, so air resistance is unlikely to explain our error. So we’re left with some error in how the experiment was done. And quite frankly, I’m not really sure what it is. I suspect it has to do with students’ reaction times – it probably took them longer to start their stopwatches when I dropped the ball than it did to stop them when the ball hit the floor – but I’m not sure. We’ll need further experiments to figure this one out.
In Conclusion
On reflection, I think I probably would have done better using a less dense ball, perhaps a styrofoam ball, that would be more affected by air resistance, so I could show how systematic errors can be useful.
Fortunately (sort of) in my demonstration I made an error in calculating the falling rate – I forgot to include the 2 under the square root sign – so I ended up with a much lower predicted falling time for the ball – which allowed me to go through a whole exercise showing the class how to use Excel’s Goal Seek function to figure out the deceleration due to air resistance.
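For the simple falling-ball case, Goal Seek isn’t strictly necessary: rearranging d = ½at² gives the effective acceleration directly from a measured fall time, and the shortfall from 9.8 m/s² is the deceleration attributable to air resistance. A sketch (the measured time here is hypothetical, not the class average):

```python
d = 5.3                 # drop height, meters
t_measured = 1.10       # hypothetical averaged stopwatch time, seconds

# Rearranging d = 0.5 * a_eff * t^2 gives a_eff = 2*d / t^2
a_eff = 2 * d / t_measured**2
drag_deceleration = 9.8 - a_eff   # difference from g attributed to air resistance
print(f"effective acceleration: {a_eff:.2f} m/s^2")
print(f"implied deceleration from air resistance: {drag_deceleration:.2f} m/s^2")
```

Goal Seek earns its keep in messier problems where the unknown can’t be isolated algebraically.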
My Excel Spreadsheet with all the data and calculations is included here.
There are quite a number of other things that I did not get into since I was trying to keep this exercise short (less than half an hour), but one key one would be using significant figures.
There are a number of good, but technical websites dealing with error analysis including this, this and this.
Working models of Leonardo Da Vinci’s devices, and video of his sketchbook, so inspired one student that she emulated Da Vinci’s style as she took her notes during our visit to the Da Vinci Machines Exhibition. While I’d asked them to bring their notebooks, I’d not said anything about taking notes (nor is there to be a quiz afterward) so it was very nice to see this student’s efforts. The exhibition is in St. Louis at the moment, until the end of the year.
What I liked most about the exhibit is that you can operate some of the reconstructions of flywheels, gears, pulleys, catapults, and other machines that came out of Da Vinci’s notebooks.
Da Vinci did a lot with gears, inclined planes, pulleys and other combination of simple machines, so the exhibit is a nice introduction to mechanics in physics. The exhibition provides a teacher’s guide that’s useful in this regard.
It’s an excellent exhibition, especially if you spend some time playing with the machines.
In my last physics exam, I asked how many bananas it would take to deliver a fatal dose of radiation. This question came up when we were discussing different types of radiation and looking at this graph. One banana gives you about 0.1 microsieverts, while the usually fatal dosage is about 4 sieverts. That works out to 40 million bananas. Michael Blastland starts from the instantly fatal dosage of 8 sieverts instead, which doubles the estimate.

My students were insistent, “Would eating forty million bananas really kill you with radiation?”

My answer was, “Yes. But other problems might arise if you try to eat forty million bananas.”
Most power plants create electricity by spinning a magnet while it’s inside a coil of wire. That’s how coal power plants do it, how hydroelectric power plants do it, how wind turbines do it, and even how nuclear power plants do it; solar panels, however, don’t do it this way. The coal and nuclear plants, for example, boil water to create steam, which spins the turbine that rotates the magnet.
In theory, you can use any type of power source to spin the turbine, including people power. On bicycles, a dynamo can power your lights. But because you’re now using some of your mechanical energy to create electricity, it will slow you down a bit. Newer hub dynamos, however, are apparently quite efficient.
So, in theory, you could use any type of animal to generate electricity. Including, for example, using bugs to charge your iPod.
I love how he holds up the voltmeter 34 seconds into the video to prove that his device works.