Friday, 19 February 2010

Math and Predicting Earthquakes.



I was just learning about the Gutenberg-Richter law. I always knew that the intensity of earthquakes follows a logarithmic scale, that an earthquake of magnitude 4 would be ten times as powerful as a magnitude 3 earthquake, and so on. But I had never known that the frequency of earthquakes follows such a scale.

The Gutenberg-Richter law says that in any general seismic zone (not just a single fault area) the number, N, of earthquakes greater than some given magnitude, M, will follow the equation log(N) = a - bM. So if b is about 1, and one paper suggests it is pretty close to that, then the number of magnitude 6 earthquakes will be only 1/10 of the number of magnitude 5 earthquakes, and we should expect 1000 times as many of magnitude 3 as of magnitude 6. In short, lots of little earthquakes and not many big ones. It turns out, reading a couple of research papers from people who study this stuff, that the Southern California area has a b value of one... how convenient.
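
Here's that arithmetic in a few lines of Python (the a and b values are just picked for illustration, not taken from any of the papers):

```python
# Gutenberg-Richter: log10(N) = a - b*M, where N is the number of
# earthquakes of magnitude >= M in the region and time window.
def gr_count(M, a=6.0, b=1.0):   # a = 6 and b = 1 are illustrative only
    return 10 ** (a - b * M)

for M in (3, 4, 5, 6):
    print(f"expected quakes of magnitude >= {M}: {gr_count(M):g}")

# With b = 1, each step up in magnitude cuts the count by a factor of 10,
# so magnitude 3 events come out 1000 times as common as magnitude 6 events.
print(gr_count(3) / gr_count(6))   # -> 1000.0
```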

One problem with the law is that you need, it seems, really accurate measures of a LOT of earthquakes. Here is how one professional wrote about it, "Seismic hazard analysis is very sensitive to the b value (slope). If earthquake rates are based on the number of M ≥ 4 earthquakes, for example, a b value error on the order of 0.05 will cause the number of M ≥ 7 earthquakes forecast to be wrong by 40%. The value of b and its error is often miscalculated in practice, however. The common technique of solving for b with a least squares fit to the logarithm of the data, for example, leads to an answer that is biased for small data sets and apparent errors that are much smaller than the real ones."
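
Just to convince myself of that 40% figure, here is the back-of-envelope version (my own check, not anything from the paper):

```python
# Anchoring the rate on M >= 4 counts, the M >= 7 forecast scales as
#   N(M >= 7) = N(M >= 4) * 10**(-b * (7 - 4))
# so a slope error of db changes the M >= 7 forecast by a factor of 10**(3*db).
db = 0.05
print(10 ** (3 * db))    # ~1.41, i.e. roughly 40% too high
print(10 ** (-3 * db))   # ~0.71, i.e. roughly 30% too low if the error goes the other way
```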

The preferred method uses Monte Carlo simulation, but reaching the 98% confidence level on a slope accurate to within that 0.05 error requires about 2000 earthquake records.
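
I imagine the simulation goes something like this (my own sketch, not the authors' code, using the standard maximum-likelihood estimate of the slope, b = log10(e) / (mean magnitude - minimum magnitude)):

```python
import math, random

def simulate_b_estimates(true_b=1.0, m_min=4.0, n_quakes=2000, n_trials=1000):
    """Draw catalogs whose magnitudes follow Gutenberg-Richter with slope true_b,
    then estimate b from each catalog with the maximum-likelihood formula."""
    beta = true_b * math.log(10)   # G-R implies exponential magnitudes above m_min
    estimates = []
    for _ in range(n_trials):
        mags = [m_min + random.expovariate(beta) for _ in range(n_quakes)]
        b_hat = math.log10(math.e) / (sum(mags) / n_quakes - m_min)
        estimates.append(b_hat)
    return sorted(estimates)

ests = simulate_b_estimates()
within = sum(abs(b - 1.0) <= 0.05 for b in ests) / len(ests)
print(f"fraction of simulated 2000-quake catalogs within 0.05 of true b: {within:.0%}")
print(f"middle 98% of the estimates: {ests[10]:.3f} to {ests[-11]:.3f}")
```

On this toy version, anyway, the middle 98% of the estimates spans just about 0.05 on either side of the true slope, which at least squares with the 2000-record figure.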

In thinking about this, it seems earthquakes have sort of achieved a more or less constant level of energy emission. A bigger earthquake, with ten times the power, only happens 1/10th as often... so over time, if my thinking is right on this, the seismic region would be giving off essentially the same amount of energy per unit of time. Does that make sense?
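
One way to poke at that idea is to tally the energy per magnitude bin, using the usual energy-magnitude relation log10(E) = 1.5M + 4.8 with E in joules; this is just my rough sketch, so take it with a grain of salt:

```python
a, b = 6.0, 1.0   # illustrative Gutenberg-Richter parameters, same as above

def count_in_bin(M):
    # quakes with magnitude in [M, M+1), from log10 N(>=M) = a - b*M
    return 10 ** (a - b * M) - 10 ** (a - b * (M + 1))

def energy_joules(M):
    # usual energy-magnitude relation: log10(E) = 1.5*M + 4.8
    return 10 ** (1.5 * M + 4.8)

for M in range(3, 8):
    print(f"magnitude {M}-{M + 1} bin: ~{count_in_bin(M) * energy_joules(M):.1e} J total")
```

If that 1.5 coefficient is right, the total per bin still creeps up by a factor of about three per magnitude step even with b = 1, so the rare big ones may carry most of the energy budget rather than everything balancing out. I could be mangling the energy relation, though.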

Oh well, I'll think on it while I ponder how they fit a logistic model to predict the mean amount of damage based on measurements of "roof drift".
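
I don't have their data, but the shape of that kind of fit is easy enough to sketch with scipy and some made-up numbers (the drift and damage values below are purely invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: roof drift ratio (%) vs. mean damage fraction (0 to 1).
drift = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.8, 2.5, 3.5])
damage = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.80, 0.92, 0.98])

def logistic(x, k, x0):
    """Logistic curve: damage rises from 0 to 1, steepness k, midpoint x0."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(k, x0), _ = curve_fit(logistic, drift, damage, p0=[2.0, 1.0])
print(f"fitted steepness k = {k:.2f}, half-damage drift x0 = {x0:.2f}%")
```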

Ok, just picked this off a site with statistics on the New Madrid fault (top photo), the most active fault in the US east of the Rockies: "Based upon historically and instrumentally recorded earthquakes, some scientists suggest the probability for a magnitude 6.0 or greater earthquake is 25-40% in the next 50 years and a 7-10% probability for a magnitude 7.5-8.0 within the next 50 years." Not quite the drop to ten percent of the frequency for an increase of one unit in magnitude. Hmmmm... so that means there must be a very different b value for the New Madrid fault than the one they use for Southern California... does that make sense? Am I close in assuming that, if the numbers above are correct, the b value for New Madrid would be about 0.6??? Help, I'm not sure I'm putting all the pieces together right here.
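
Here is my back-of-envelope attempt to turn those 50-year probabilities into rates, assuming a Poisson model and taking the midpoints of the quoted ranges (both choices are mine, not the site's):

```python
import math

def annual_rate(prob_in_window, years=50):
    # Under a Poisson model, P(at least one event in T years) = 1 - exp(-rate*T)
    return -math.log(1 - prob_in_window) / years

rate_m6  = annual_rate(0.33)    # ~33% as the middle of the 25-40% figure
rate_m75 = annual_rate(0.085)   # ~8.5% as the middle of the 7-10% figure

# Crude slope between the two rates, treating the second figure as roughly M >= 7.5.
# (The quoted ranges aren't really comparable like this, since one is cumulative and
# the other is a 7.5-8.0 band, so this is only a sanity check, not a real b value.)
b_rough = (math.log10(rate_m6) - math.log10(rate_m75)) / (7.5 - 6.0)
print(f"annual rate M>=6: {rate_m6:.4f}, M~7.5: {rate_m75:.4f}, rough slope: {b_rough:.2f}")
```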

1 comment:

  1. Don't forget, the b value would relate to the number of earthquakes at a specific magnitude. The quote you cited in your last paragraph says the probability of a magnitude 6 or greater earthquake. So naturally that's going to be different than the probability of an earthquake between magnitude 6 and 7, and excluding all larger earthquakes.

    I didn't do the math to see if that makes the quoted figures add up, but it's definitely a relevant point.
