## Saturday 31 July 2010

### John Conway Ties an Impossible Knot

When I take my students to the Centre for Mathematical Sciences in Cambridge for lectures, we always stop to see the gates, which display chiral reflections of two knots that share the same knot polynomial, a Conway discovery.

While watching a video tribute to Martin Gardner, I came across Conway doing one of his mathematical tricks that I thought was interesting. See if you can do it. What he says is, "You know, it is impossible to tie a knot without leaving go of the ends of the string the way I just did."

Labels:
Impossible Knot,
John Conway

### The Marginal Economics of a Kindergarten Education

A recent study extending the research begun with "Project Star" in Tennessee suggests that a standout kindergarten teacher is worth about $320,000 a year. That is the present value of the "additional money that a full class of students can expect to earn over their careers" if they make exceptional growth in the kindergarten year.

I came across this from an article in the NY Times on a tip from my colleague, Dru Martin.

As reported in the article, "Great teachers and early childhood programs can have a big short-term effect. But the impact tends to fade. By junior high and high school, children who had excellent early schooling do little better on tests than similar children who did not — which raises the demoralizing question of how much of a difference schools and teachers can make". But the research team went on to look at the later life of these same students.

"Just as in other studies, the Tennessee experiment found that some teachers were able to help students learn vastly more than other teachers. And just as in other studies, the effect largely disappeared by junior high, based on test scores. Yet when Mr. Chetty and his colleagues took another look at the students in adulthood, they discovered that the legacy of kindergarten had re-emerged. "

"They examined the life paths of almost 12,000 children who had been part of a well-known education experiment in Tennessee in the 1980s. The children are now about 30, well started on their adult lives." One of the things that makes the Project Star subjects unique is that they were randomly placed in classes, with the potential to average out the impacts of cultural and social differences. The great Harvard statistician Frederick Mosteller described the study as "one of the most important educational investigations ever carried out and illustrates the kind and magnitude of research needed in the field of education to strengthen schools."

The original Star Project study had been designed to test the impact of class size on student achievement. The follow-up results of the Star Project in 1996, when the subjects were in high school, indicated that the smaller class sizes "appears to have cut the black-white gap in the probability of taking a college-entrance exam by more than half," according to Princeton University economist Dr. Alan B. Krueger, who researched test data linked to the Project STAR database. Additional benefits were cited related to graduation rates.

One wonders, if we were to throw some of the military budget into education research (and education), whether we might not find that great teachers at every level impact students in a similar way... and maybe we would go back and start de-centralizing some of these big rural county schools and bringing back the little rural community schools that used to be the heart of small towns and villages across America.

I remember once when John Dossey put a picture of his fifth grade (I think) class and teacher on the overhead, and then described the success of the class-members.

*John, If you see this, send me a copy of the picture and the description, and I will update this.*
Labels:
class size,
Educational research,
John Dossey,
Project Star

### ANYTHING???

OK, I'm a BIG fan of the power of math, but I'm not sure I subscribe to the "Math can do anything" statement.... Still, it's nice to see someone out there cheering for our side...

I found this at "squareCircleZ", a new blog (for me) from a guy who seems to have a long and colorful educational history. Thanks, "Zac".


Labels:
math video,
squareCircleZ

## Thursday 29 July 2010

### Solving an Infinite Radical Chain

One of the blogs I've started to follow lately, Math Frolic, just posted the following algebra problem.

It is a nice problem, and the answer is given... and then??? Well, nothing, actually... that was the end of the blog...

Now I like this blog, and "Shecky Riemann", the blogger, does some really good stuff... but solving this for a single answer is... Ok, being kind, let me just say it was NOT good enough. Surely we could tease the readers a little and ask them to explore the general expression Integer = nested radicals of (r)... I explored that here, and more on this blog...

So here is an upgrade (I hope) to Shecky's post... let's take the 3 and replace it by any integer... Solve it and look for a pattern... and let the mathematical term "pronic" creep into your vocabulary if it hasn't yet, children.
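If you want to explore the pattern numerically before doing the algebra, here is a minimal Python sketch (the function name `nested_radical` is my own). Setting x = sqrt(r + x) gives x^2 - x - r = 0, so the limit is (1 + sqrt(1 + 4r))/2, an integer exactly when r is pronic, r = k(k+1):

```python
from math import sqrt

def nested_radical(r, iterations=60):
    """Approximate sqrt(r + sqrt(r + sqrt(r + ...))) by iterating x -> sqrt(r + x)."""
    x = 0.0
    for _ in range(iterations):
        x = sqrt(r + x)
    return x

# For pronic r = k(k+1), the limit (1 + sqrt(1 + 4r))/2 collapses to the integer k+1.
for k in range(1, 6):
    r = k * (k + 1)              # pronic values: 2, 6, 12, 20, 30
    print(r, nested_radical(r))  # limits approach 2, 3, 4, 5, 6
```

Trying a non-pronic value like r = 3 instead yields the irrational (1 + sqrt(13))/2.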

Labels:
infinite radical sequence,
pronic

## Monday 26 July 2010

### Standard Deviations of Sums of Distributions

A week or so ago I was at a textbook selection conference with a couple of really good teachers, and one of them (thanks, Dru) pulled out a copy of Robert Hayden's "Advice to Mathematics Teachers on Evaluating Statistics Textbooks." I mention it now because it has two good pieces of advice. (Ok, it has way more pieces of good advice than that, but I'm mentioning these two in particular.)

The first, I hope I follow: "...make sure the textbook *mentions* assumptions and teaches students to *check* them rather than *make* them." One of the ways I try to get students to *check* assumptions is to make them understand, as much as possible in the limited time of an AP course, the WHY. In order to do that, I frequently violate another of Professor Hayden's pieces of wisdom: "Be wary of an author who is not familiar with enough real data sets to illustrate a textbook."

Ok, I'm gonna claim some "weasel" room here. First, I'm thinking more along the lines of an exercise to help students understand why checking independence is so important, not writing a textbook. Second, even Professor Bob himself says, "While there may be places (such as Anscombe's regression examples) in which a skillfully fabricated batch of numbers illustrates a pedagogical point..." Ok, so the "skillfully" may not apply to what follows, but I hope the fabricated data at least help drive home a "pedagogical point".

I begin with two simple data populations, X = {1,1,1,2,2,2,3,3,3} and Y = {1,1,1,3,3,3,5,5,5}. Students who have learned the "Standard Deviation as Distance" approach can quickly check (or use a calculator) and find that the standard deviation of the X population is sqrt(2/3), or about .8165. For Y the standard deviation is 1.633. Perhaps for what we will be doing, we remind them that the variance of each is the square of the standard deviation, so Var(X) = 2/3 and Var(Y) = 8/3.
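Those values are easy to confirm by machine as well; a quick Python sketch (variable and function names are my own) of the population variance and standard deviation:

```python
from math import sqrt

X = [1, 1, 1, 2, 2, 2, 3, 3, 3]
Y = [1, 1, 1, 3, 3, 3, 5, 5, 5]

def pop_var(data):
    """Population variance: the mean squared deviation from the mean."""
    m = sum(data) / len(data)
    return sum((d - m) ** 2 for d in data) / len(data)

print(pop_var(X), sqrt(pop_var(X)))  # 2/3, about 0.8165
print(pop_var(Y), sqrt(pop_var(Y)))  # 8/3, about 1.633
```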

So what happens if we add or subtract the populations?

*It all depends!* If the populations are independent, then any X and any Y may (must?) be associated with equal probability. I illustrate this by pairing one of each X value with one of each Y. (Is it possible for two distributions to be independent without this type of each-x-with-each-y association?)

X___1___1___1___2___2___2___3___3___3

Y___1___3___5___1___3___5___1___3___5.

and the sum and differences are then

X+Y =2___4___6___3___5___7___4___6___8 and

X-Y =0__-2__-4___1__-1__-3___2___0__-2

I think it is worth drawing the two resulting distributions, because many students will NOT see that these distributions are reflections of each other. So they should have exactly the same standard deviations (this takes a moment's reflection for some students).

Wow, that's good news. If the population items are independent of each other in the way they are combined, it doesn't matter if you add them or subtract them; the spread is the same since the two distributions are symmetric, which means the standard deviations should be (and are) the same, about 1.8257. Even better, we can point out that the variance, 10/3, is simply the sum of the original variances, 2/3 + 8/3. For me it is worth pointing out this "Pythagorean" relationship, [StDev(X+Y)]^{2} = [StDev(X)]^{2} + [StDev(Y)]^{2}, IFF X and Y are independently associated. (*Oops, I have been called out on this mistake... the statement is true IF X and Y are independent, but also in any situation in which the correlation coefficient is zero, which does not necessarily require independence... see the comment from "gasstationwithoutpumps" below.* **"Mea culpa" and thanks to "gas..."**)

BUT... what if the original populations were NOT independent? (Quick, think of two data sets that you would really combine in real life that *are totally* independent... better yet, send your ideas in the comments.)

Well, they might have a positive or a negative correlation, so we slightly rearrange our data sets and group lower numbers somewhat together (no ones with the fives) like this:

X___1___1___1___2___2___2___3___3___3

Y___1___3___1___1___3___5___5___3___5.

Now our sums and differences are

X+Y =2___4___2___3___5___7___8___6___8 and

X-Y =0__-2___0___1__-1__-3__-2___0__-2

We recognize quickly that the sets no longer have the same shapes. The distribution of sums is almost uniform, with the peaks at the ends, while the difference distribution has two peaks closer to the center.

So what are the spread measures now? The standard deviation of the sum distribution is 2.26, the square root of a variance of 46/9. The differences have a standard deviation of 1.247, the square root of a variance of 14/9, a really big difference. In fact, we help the students notice that the two variances are the same distance from 10/3 = 30/9, the common variance when the populations were combined independently. The sum distribution's variance is 16/9 higher; the difference distribution's is 16/9 lower. Is this just a curious coincidence? (By now my students know that almost NOTHING I bring up is a "curious coincidence".)

So how can we explain this difference? Slowly you lead their thinking... "If the distributions are NOT independent, they must be dependent... and there must be some relationship... some **measure** of how *UN-independent* they are." Eventually they will think of the correlation coefficient, r. In this association between X and Y they have a positive correlation of 2/3... can that help? If the relationship when the association was independent is "Pythagorean", maybe we can look for some extension of the Pythagorean theorem to help... Can we find something like the Law of Cosines that would tie the package together? After all, we need something that will add 16/9 to the variance of the sum distribution, and subtract the same amount for the differences... I can't imagine that I would have kids who would see this, and will probably lead them to observe that [StDev(X+Y)]^{2} = [StDev(X)]^{2} + [StDev(Y)]^{2} + 2 r [StDev(X)][StDev(Y)]. They can quickly test that the change of sign leads to [StDev(X-Y)]^{2} = [StDev(X)]^{2} + [StDev(Y)]^{2} - 2 r [StDev(X)][StDev(Y)].

I hope before I get to this point I have laid a foundation for it by giving a short presentation based on a blog from John D Cook at "The Endeavour" that shows the geometrical relation between the correlation coefficient and the cosine of an angle. I hope to write a blog about this relationship in a more vector sense later.
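The whole exercise, including the correlation-adjusted relationship, can be checked numerically. A Python sketch (helper names my own) using the dependent pairing of the two populations:

```python
from math import sqrt

# The dependent pairing of the two populations used in the exercise.
X = [1, 1, 1, 2, 2, 2, 3, 3, 3]
Y = [1, 3, 1, 1, 3, 5, 5, 3, 5]

def pop_var(d):
    """Population variance."""
    m = sum(d) / len(d)
    return sum((v - m) ** 2 for v in d) / len(d)

def pop_cov(a, b):
    """Population covariance of two paired lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

r = pop_cov(X, Y) / (sqrt(pop_var(X)) * sqrt(pop_var(Y)))  # correlation, 2/3 here

sums = [x + y for x, y in zip(X, Y)]
diffs = [x - y for x, y in zip(X, Y)]

# Var(X+Y) = Var(X) + Var(Y) + 2 r SD(X) SD(Y) = 46/9
# Var(X-Y) = Var(X) + Var(Y) - 2 r SD(X) SD(Y) = 14/9
print(pop_var(sums), pop_var(diffs), r)
```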

All of this follows in the wake of a warning about non-real data from Professor Hayden, so it is important to follow up with real data that should bear this out. I'm thinking something simple like their own age in months and height. If it is true for all data sets, it should be true with the measures we have about them; but I am very willing to consider suggestions for a more appropriate data set.

## Friday 23 July 2010

### A Strange Sequence Produces a Stranger Sum

Dave Richeson just posted notes from the annual meeting of The Euler Society. One item that caught my eye:

Let S = {4, 8, 9, 16, 25, 27, 32, 36, ...} be the set of all nontrivial powers (listed without repeats). Christian Goldbach discovered the following really neat summation...

If you take each number in the set, reduce it by one, and then take the reciprocals, they add up to one:

1/3 + 1/7 + 1/8 + 1/15 + 1/24 + 1/26 + 1/31 + 1/35 + ... = 1

This is amazing to me because it is brilliant, beautiful, and yet, tells us (almost)nothing about the original set.

Euler proved this, Dave tells us, by starting with the infinite harmonic series... go figure...so what do you do in YOUR spare time.....
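Short of Euler's proof, you can watch the partial sums creep toward 1. A Python sketch (the bound and names are my own) that collects the nontrivial powers up to a million, without repeats, and sums the reciprocals:

```python
# Collect the nontrivial perfect powers m^k (k >= 2) up to a bound, without repeats.
LIMIT = 10**6
powers = set()
m = 2
while m * m <= LIMIT:
    p = m * m
    while p <= LIMIT:
        powers.add(p)
        p *= m
    m += 1

# Goldbach's sum: 1/(p-1) over the distinct powers approaches 1.
s = sum(1 / (p - 1) for p in sorted(powers))
print(len(powers), s)  # the partial sum is just under 1
```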

Labels:
euler sum...

## Saturday 17 July 2010

### Fibonacci-licious

All math is beautiful, but some is more visual... Found this on John D Cook's "The Endeavour" blog and wanted to share..

Labels:
fibonacci video

## Thursday 15 July 2010

### Holy Moly! That's a Devilish Number

It is pretty well known and accepted that the number of the beast in the Book of Revelation in the Christian Bible is 666, although apparently 616 was found in some third-century works. 666 is a much more interesting number, so I thought I would point out a couple of devilish oddities that I learned from "Mathematical Amazements and Surprises" by Alfred S. Posamentier and Ingmar Lehmann.

If you add up the first 6^{2} or 36 natural numbers, 1 + 2 + 3 + ... + 36, you get 666... Note that six is a triangular number, 36 of course is a square, and the result, 666, must also be triangular since it is the sum of a string of consecutive natural numbers starting with one.

Of course we can make our Devilish number more holy, by using the Divine number seven. If you take the first seven primes, 2, 3, 5, 7, 11, 13 and 17, and find the sum of their squares (Holy Pythagoras, Batman!) Yeah, you guessed it.... 666.

If you factor 666 to its primes, you get 2*3*3*37.... and the sum of the digits of the factors is 2+3+3+(3+7)= 6+6+6..... spooky, huh...

The good professor has several more that I will let you pursue on your own, but hey, just one more... as a problem... You can insert plus signs in appropriate places in the sequence 1 2 3 4 5 6 7 8 9 and the sum will be 666 in more than one way... How many can you find?
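If you want to check your answers, a brute-force Python sketch (entirely my own) tries every way of placing plus signs in the eight gaps between the digits:

```python
from itertools import product

digits = "123456789"
solutions = []
# Each of the 8 gaps between digits either gets a '+' or doesn't: 2^8 cases.
for gaps in product([False, True], repeat=len(digits) - 1):
    expr = digits[0]
    for d, plus in zip(digits[1:], gaps):
        expr += ("+" if plus else "") + d
    if sum(int(part) for part in expr.split("+")) == 666:
        solutions.append(expr)

print(solutions)  # includes 123+456+78+9, among others
```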

That's Captain Marvel at the top, by the way. He was born full grown in a 1939/1940 Fawcett (later DC) Comic. He is up there because "Holy Moly" was perhaps invented, but certainly popularized by the comic book hero. Moly (or Moley) itself comes from a much earlier "comic book", Homer's Odyssey. It was the herb that Hermes gave to Odysseus to protect him from Circe's magic.

Math fun and Classical Literature in one place... how cool is that?

Labels:
666,
Alfred S. Posamentier,
mark of the beast

## Wednesday 14 July 2010

### Measure of Spread

My recent post on the standard deviation (a measure of spread) happened to coincide closely with a really nice post from Professor Robert W. Jernigan over at Statpics on how the ratio of the average CEO's salary to the average salary in the US changed between 1965 and 2005. The "graphs" in the two tables represent the average salary as a paper cup in each stack (the little brownish blob), and the average CEO salary is shown by a stack (pyramid?) of champagne glasses. The actual numbers he gives are 25:1 in 1965 and 275:1 in 2005. The image is actually from an art exhibit in Detroit on "The American Dream".

Those who actually pay attention when I gripe know I hate the casual use of the word "average" without detail, but I assume this refers to the common "arithmetic average" salary in the US. If that is what is used, it means that the truth is actually more severe than the picture shows. Every time the CEO gets a big salary jump, they drag the average up a little more beyond what the "typical" or median income is.
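To see the effect, here's a tiny fabricated example in Python (the salary figures are invented purely for illustration): one outsized salary drags the mean far above the median, while the "typical" worker's pay never moves.

```python
from statistics import mean, median

# Nine workers at $40,000 plus one CEO; watch the mean as the CEO's pay jumps.
workers = [40_000] * 9
for ceo_pay in (100_000, 1_000_000, 10_000_000):
    salaries = workers + [ceo_pay]
    print(ceo_pay, mean(salaries), median(salaries))  # mean climbs, median stays 40,000
```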

Without much time this morning to search, I still found a quick indicator of wealth distribution in 2005 at this site.

Here is a breakdown of how household incomes fall, from top to bottom, according to U.S. Census Bureau statistics for 2005:

* Top 1%: Earns $350,000 or more

* Top 1.5%: Earns $250,000 or more

* Top 5%: Earns $167,000 or more

* Top 20%: Earns $92,000 or more

* Top 25%: Earns $77,500 or more

* Middle 20%: Earns $35,000 to $55,000

* Middle 33%: Earns $30,000 to $62,500

* Bottom 25%: Earns $0 to $22,500

* Bottom 20%: Earns $0 to $18,500

* Bottom 10%: Earns $0 to $10,500

And apparently it has gotten worse since. Just found this at a site called "Fair Economy":

Hope you are getting your share.

CEO-WORKER DIVIDE: CEOs in the United States, despite our current hard economic times, continue to pocket outlandishly large pay packages. S&P 500 CEOs last year averaged $10.5 million, 344 times the pay of typical American workers. Compensation levels for private investment fund managers soared even further out into the pay stratosphere. Last year, the top 50 hedge and private equity fund managers averaged $588 million each, more than 19,000 times as much as typical U.S. workers earned.

Labels:
ceo salary,
spread,
statpics blog

## Monday 12 July 2010

### Standard Deviation as Distance

Earlier today I had a conversation with another HS stats teacher that reminded me that, when I was writing about vectors a while back, I had not covered two nice uses in stats. I hope to correct one of those today.

As we were talking I bemoaned the fact that few introductory textbooks seem to really help kids to develop any intuitive idea of what the standard deviation is or how it works. As we talked, I mentioned that I thought there was a geometric approach to the standard deviation that might help make it more clear. You be the judge.

I think the standard deviation is most easily approached as a distance (more specifically a sort of average of distances). Most high school stats students can quickly find the distance between two points on the plane using the square root of the sum of the squares of the differences (deviations) in each direction (dimension). For those who have never been introduced to it, only a few moments convinces them that it can generalize to n-dimensions. And in a few short minutes they can be finding the "distance" between (point)vectors in any number of dimensions, and many can quickly invent a shortcut to the calculation using the list functions of their calculators.

So why does the standard deviation as a distance make sense? The standard deviation is a measure of how much the data items "disagree" with each other. Start with two measures, and for the moment we use the unconventional notation of calling one of them x_{1} and the other y_{1}. Now if they agree perfectly, they lie on the line y=x. If they don't, they will be off the line by some distance. We begin by finding that distance. The perpendicular from the line y=x to the point (x_{1}, y_{1}) crosses y=x at the point where both coordinates equal the average of x_{1} and y_{1}, a point we call (xbar, xbar). That means the distance of the point (x_{1}, y_{1}) from the line y=x is just sqrt[(x_{1} - xbar)^{2} + (y_{1} - xbar)^{2}].

Now if all our data sets had only two values (and statistics was REALLY EASY) then we could use this "distance" measure as a "standard measure". But one of the funny things about distance is that it grows with dimension, "sort of"... here is what I mean. In one dimension, the distance from (0) to (1) is one unit. In two dimensions the distance from (0,0) to (1,1) is farther; it's the square root of two. In three dimensions the distance from (0,0,0) to a point one away in each dimension is the square root of three. This would mean that the data set {1,3} would seem to be "less spread out" than {1,1,3,3}, which seems like a bad thing. To compensate, we simply divide this Pythagorean distance by the square root of the dimension.

In effect then, the standard deviation of a population of values is the distance between the n-dimensional points A = (x_{1}, x_{2}, x_{3}, ..., x_{n}) and B = (xbar, xbar, ..., xbar), divided by the square root of n. In truth, it would seem there is no need to memorize a formula when the student understands it as a "mean distance".

As a happy coincidence, John Cook at The Endeavour web site just posted a blog about the relationship between vector geometry and statistics when finding the standard deviation of a sum or difference of two distributions. A must-read for intro stats teachers who want to be able to explain what happens (and why) when the distributions are NOT independent.
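The whole idea fits in a few lines of Python (a sketch, with my own names): the population standard deviation is the distance from the data point to the "every coordinate equals the mean" point, divided by sqrt(n).

```python
from math import dist, sqrt

def stdev_as_distance(data):
    """Distance from (x1, ..., xn) to (xbar, ..., xbar), divided by sqrt(n)."""
    n = len(data)
    xbar = sum(data) / n
    return dist(data, [xbar] * n) / sqrt(n)

print(stdev_as_distance([1, 1, 1, 2, 2, 2, 3, 3, 3]))  # about 0.8165
print(stdev_as_distance([1, 3]), stdev_as_distance([1, 1, 3, 3]))  # both 1.0
```

Note that {1,3} and {1,1,3,3} come out with the same spread once the sqrt(n) correction is applied, exactly the point made above.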

Labels:
distance,
standard deviation,
vectors

## Sunday 4 July 2010

### A Pythagorean Generalization

Looking through the recent, and excellent, 67th Carnival of Mathematics, I came across a link to a theorem at "Cut the Knot" that I had never known.

It begins with the simple fact that for a triangle ABC, Cos

^{2}(A) + Cos^{2}(B) + Cos^{2}(C) = 1 IFF (if and only if) the triangle is a right triangle..that is, one of Cos(A), Cos(B) or Cos(C) = 0.But the part I loved is the more general extension... that for ANY triangle, Cos

^{2}(A) + Cos^{2}(B) + Cos^{2}(C)+2 Cos(A)Cos(B)Cos(C) = 1.Since only one of the angles can be obtuse (and hence the quantity 2 Cos(A)Cos(B)Cos(C) would be negative only in the obtuse case), we can use Cos

^{2}(A) + Cos^{2}(B) + Cos^{2}(C) as a determinant for triangles. When the sum is > 1 the triangle is obtuse. If it is equal to one, the triangle is a right triangle; and if the sum is< 1, the triangle is acute. Not sure how I got so old without knowing that.Can anyone tell me who/when this general identity was first discovered?
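A few lines of Python make a nice classroom check of both the identity and the classifier (a sketch of my own, not from Cut the Knot):

```python
import math

def classify_triangle(a_deg, b_deg):
    """Classify a triangle by the sum cos^2(A) + cos^2(B) + cos^2(C),
    given two of its angles in degrees (the third is 180 - a - b)."""
    c_deg = 180 - a_deg - b_deg
    A, B, C = (math.radians(t) for t in (a_deg, b_deg, c_deg))
    s = math.cos(A) ** 2 + math.cos(B) ** 2 + math.cos(C) ** 2
    # the general identity holds for ANY triangle
    assert abs(s + 2 * math.cos(A) * math.cos(B) * math.cos(C) - 1) < 1e-9
    if abs(s - 1) < 1e-9:
        return "right"
    return "obtuse" if s > 1 else "acute"

print(classify_triangle(90, 45))   # right
print(classify_triangle(60, 60))   # acute
print(classify_triangle(120, 30))  # obtuse
```

The embedded assertion verifies the general identity on every call, so any triangle you feed it doubles as a test case.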

Labels:
Pythagorean-like relations

## Saturday 3 July 2010

### Who has Pi On His Tombstone?

My family know I'm a math history nut, so I'm fair game for "Do you know..?" questions.

While waiting for the Blue Angels airshow performance at the 2010 Cherry Festival in Traverse City and picnicking with my family, my sister-in-law, Kerry Sue, popped up with "Who has the first fifteen (she obviously meant 35) digits of Pi on their tombstone?"

Fortunately, I guessed right, but realized I had never written anything (wasn't even sure I had seen a picture of the tombstone) about this memorial. With a little prompting I found this site at the AMS that has a picture of the memorial in Leiden (which I took part of above).

It seems that Kerry's question came almost exactly ten years after the memorial had been replaced. On July 5, 2000, a very special ceremony took place in the St. Pieterskerk (St. Peter's Church) at Leiden, the Netherlands. On that date a replica of the original tombstone of Ludolph van Ceulen was placed in the church to replace the one that had disappeared. Van Ceulen bounded the circumference of a circle with a diameter of one between two rational fractions. The smaller, expressed as a decimal, is 3.14159265358979323846264338327950288, and the larger agrees with it in every digit except the last.

Labels:
Pi,
van Ceulen

## Friday 2 July 2010

### USA = Bad Place to Give Birth?

From the Hartford Wellness Examiner web page:


"With the amount of attention being given to health care reform these days it may shock many of you to learn that the number of women who die while giving birth in this country has continued to rise. According to reports published in the English medical journal The Lancet, "women giving birth in the United States die at more than four times the rate of those in Italy and twice as many as in Britain."

I was surprised when I came across virtually the same statistic in an article in the Plus web magazine from the UK. The article introduced me to a new measurement, the micromort: a one-in-a-million risk of death. A nice explanation/comparison came with it: a micromort is the probability of death in a strange game of "Russian roulette" in which the player throws 20 coins into the air and is executed if all twenty land heads.
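The arithmetic behind the coin game is worth a line or two: twenty fair coins all landing heads has probability (1/2)^20 = 1/1,048,576, just a shade under one in a million. A quick check:

```python
p = 0.5 ** 20     # probability all twenty coins land heads
print(p)          # 9.5367431640625e-07
print(2 ** 20)    # 1048576 -- the "million" is really 2^20
```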

They have several animated graphs there, including this one. Write down your guesses for each before you go and check... what is the safest way to travel?

And here is the one that started me off on this search:

Labels:
infant mortality,
micromort,
one-in-a-million

## Thursday 1 July 2010

### Nine is a Harmonious Number

David Bee sent this one to the AP Stats EDG and it caught my eye:

At the end of each Court term the NYTimes has a chart showing how many agreements there were between each of the 9C2 = 36 possible pairings of the nine Justices. (There was something someone in the Forum posted earlier that reminded me of this but I don't recall it.) [For example, the highest percentage of agreements was between Justices Scalia and Thomas (92 percent) and the lowest was between Justices Stevens and Thomas (60 percent). Thus, for the 36 pairings, 60% <= in agreement <= 92% --- math and stat teachers should have it so good...;^).]

I guess I always thought questions that got to the Supreme Court would be pretty much 50/50 propositions, and was surprised to find out that Justice Stevens, the longest serving of the Justices, who had the lowest percentage of voting with the majority, was still in the majority 73% of the time; and Roberts and Kennedy agreed with the majority a whopping 92%/90% of the time.

So the stats question is: if all nine judges voted randomly on each decision, what percentage of the time would they be in the majority? (A computer simulation is unacceptable; come, let us reason together.)
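For anyone who wants to check their answer afterward, the reasoning reduces to one binomial count: a given justice is in the majority exactly when at least four of the other eight random voters happen to cast the same vote. In Python (a check of the count, not a simulation):

```python
from math import comb

# A justice is in the 5-or-more majority iff at least 4 of the other
# 8 justices cast the same vote; each agrees with probability 1/2.
favorable = sum(comb(8, k) for k in range(4, 9))  # 70+56+28+8+1 = 163
print(favorable, 2 ** 8)     # 163 256
print(favorable / 2 ** 8)    # 0.63671875 -- about 64% of the time
```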

Those who want the original NY Times article may find it here.

Labels:
binomial partitions,
supreme court

### Scatter Plot, at 95 MPH

The image above is another gem from the "Statpics" blog of Robert W. Jernigan. It is a plot of 1300 pitches from Yankees pitcher Mariano "Mo" Rivera. See the original here.

Pretty great control, guess that's why they call it "painting the corners".

Labels:
baseball,
scatter plot,
statistics
