08 November 2012

Micromorts & Microlives

Schneier on Security | Bruce Schneier | Micromorts

Here's a great concept: a micromort:
Shopping for coffee you would not ask for 0.00025 tons (unless you were naturally irritating), you would ask for 250 grams. In the same way, talking about a 1/125,000 or 0.000008 risk of death associated with a hang-gliding flight is rather awkward. With that in mind, Howard coined the term "microprobability" (μp) to refer to an event with a chance of 1 in 1 million, and a 1 in 1 million chance of death he calls a "micromort" (μmt). We can now describe the risk of hang-gliding as 8 micromorts, and you would have to drive around 3,000 km in a car before accumulating a risk of 8 μmt, which helps compare these two remote risks.
There's a related term, microlife, for things that reduce your lifespan. A microlife is 30 minutes off your life expectancy. So smoking two cigarettes has a cost of one microlife.
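To make those conversions concrete, here's a minimal sketch in Python (the hang-gliding and cigarette figures are the ones quoted above; the constant and function names are my own):

    MICROMORT = 1e-6          # one micromort = a one-in-a-million chance of death
    MICROLIFE_MINUTES = 30.0  # one microlife = 30 minutes of life expectancy

    def to_micromorts(probability_of_death):
        # Express a probability of death as a number of micromorts.
        return probability_of_death / MICROMORT

    def to_microlives(minutes_of_life_lost):
        # Express lost life expectancy (in minutes) as a number of microlives.
        return minutes_of_life_lost / MICROLIFE_MINUTES

    print("%.1f micromorts" % to_micromorts(1.0 / 125000))  # hang-gliding flight: 8.0
    print("%.1f microlives" % to_microlives(30.0))          # two cigarettes: 1.0
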
(1) Yes, this is a neat idea. The analogy to using grams (or pounds) rather than tons makes a lot of sense.

(2) People are tremendously bad at reasoning numerically about fractions. For instance, people tend to see the difference between 1/50,000 and 1/100,000 as roughly the same as the difference between 1/200,000 and 1/250,000. Micromorts may help alleviate that problem, since they hold the denominator constant and give people only the numerator to reason about.
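To see why that helps, here is the comparison worked out (my arithmetic, not figures from anywhere): the four fractions above rewritten over a constant denominator of one million.

    # The four risks from the paragraph above, expressed in micromorts so
    # only the numerators differ.
    for p in (1.0 / 50000, 1.0 / 100000, 1.0 / 200000, 1.0 / 250000):
        print("%-8g -> %2.0f micromorts" % (p, p / 1e-6))
    # 2e-05    -> 20 micromorts
    # 1e-05    -> 10 micromorts
    # 5e-06    ->  5 micromorts
    # 4e-06    ->  4 micromorts

Side by side, the first pair differs by 10 micromorts and the second by only 1, which is exactly the distinction the raw fractions hide.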

(3) Perhaps we need a unit like the horsepower: some sort of real-world baseline for risk to which we can refer. Maybe everything could be compared to lightning strikes, so the chance of dying in a helicopter crash might be 0.25 L-strikes if four times as many deaths are caused by lightning as by helicopter crashes. (I'm totally guessing about those values.)

My first thought was deaths-from-falling-down-stairs, but that rate is nowhere near constant over time or across cultures. Plus I can't think of an easy abbreviation like "L-strike".
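For what it's worth, here is what that hypothetical unit might look like in code. Both death counts are placeholders mirroring the guess above, not real statistics.

    # Hypothetical baseline unit: express a risk as a multiple of the risk of
    # dying by lightning strike ("L-strikes"). Figures are invented for illustration.
    LIGHTNING_DEATHS_PER_YEAR = 40.0    # assumed baseline, not a real statistic
    HELICOPTER_DEATHS_PER_YEAR = 10.0   # assumed, not a real statistic

    l_strikes = HELICOPTER_DEATHS_PER_YEAR / LIGHTNING_DEATHS_PER_YEAR
    print("Dying in a helicopter crash: %.2f L-strikes" % l_strikes)  # 0.25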

(4) I think micromorts and microlives may give more ammunition to scaremongers. It's already easy enough to scare people about tiny risks. I sure don't want to make it any easier. We would have to do some surveys to find out, but I suspect 8.6 micromorts sounds scarier than a 1 in 116,000 chance of dying.

(5) You can gather pretty good statistics on the chance of dying in a hang-gliding accident, and so get a fairly precise measure of the micromorts involved. Determining microlives involves much more sophisticated actuarial calculations, and I'm pretty certain the results will be much less accurate. I would be very skeptical of anyone claiming to tell me with any precision how many microlives eating a sunny-side-up egg or a medium-rare steak will cost me.

I'm all for doing things scientifically and quantitatively, but I am wary of pseudo-science and false precision. "All large calculations are wrong," as a wise man once said. I do not want to give technocrats and professors hungry for grants any more reason to go in front of Congressional hearings to talk about how they've calculated that allowing Americans to smoke weed/drink wine/drive Toyotas/use Sidecar for ride-sharing/eat imported haggis/etc. is costing them 59.463271 microlives each.

Yeah, this is a bit immaterial. Micromorts/microlives are still a good idea. I'm just pointing out that they may be used for Evil as well as Good.

4 comments:

  1. These sorts of risk numbers are only useful to insurance companies and governments. They tell you NOTHING WHATSOEVER about whether any particular individual will actually be harmed by the behavior in question, because there are always people who engage in the behavior and do not suffer from it. There are people who smoke 2 packs of cigarettes a day and die in their 90s from non-smoking causes. There are people who never smoke at all, but die of lung cancer in their 40s. Remember Jim Fixx, the famous runner who died of a heart attack. He was living the supposedly "healthy" life, but he still died young.

    Insurance companies can use these numbers to set rates for insurance. They apply a certain amount of conservatism to the probability distribution function to account for policy holders who don't tell the truth, and to make a profit.

    Governments use these numbers to try to change behavior, which is what you are worried about. Governments start with information campaigns, then proceed to nudging, taxing, and eventually banning the disfavored behavior. They care about the effect of the behavior on public finances and on politicians' chances for re-election. When people behave in ways that cost the government money, the government cares. Politicians care about how dying children and old people drive voters to prefer leaders who "care for the children and other victims". However, politicians really HATE the concept of quantifying acceptable levels of risk for any behavior, because it implies that they think there is an acceptable body count associated with that behavior. And politicians NEVER want the voters to think that THEY think any level of death is acceptable.

    Micromorts is an interesting concept, but it is really meaningless when it comes to individuals. Think about all of those women who try to do everything they can to make sure that the child they conceive, bear, and raise turns out as perfect as possible. They look at all of the statistical factors associated with the most successful children, and try to make sure that their own children have every statistical benefit in their favor. Think about the helicopter moms.

    Explaining risk to the public in a way that really informs, while not alarming them, is really hard. See, e.g., the matter of "fiendishly toxic and deadly" radiation.

  2. "Explaining risk to the public in a way that really informs, while not alarming them, is really hard."

    Totally agree.

    "Micromorts is an interesting concept, but it is really meaningless when it comes to individuals."

    I don't think I agree. Yes, a lot of people (most, even?) will misuse these stats and live by them slavishly. But I'd rather hear my doctor tell me my morning bagel is costing me ~X microlives than simply have him tell me it's bad for me. I know it's bad; I want to know *how* bad so I can weigh costs & benefits. Some people might even become less slavish about avoiding everything that some article labeled "bad", once they can see that it's bad, but only by a vanishingly small amount.

    As a consumer, I'd rather know that a particular small car is X micromorts more dangerous than a full-sized one, rather than simply knowing it's less safe. Is that measurement going to be exact? No, but it's better than only knowing the sign of the measurement.

    I'll add an extra class of people who could really benefit from micromorts besides actuaries: engineers.

    Replies
    1. "Micromorts is an interesting concept, but it is really meaningless when it comes to individuals.

      I don't think I agree. Yes, a lot of people (most, even?) will misuse these stats and live by them slavishly. But I'd rather hear my doctor tell me my morning bagel is costing me ~X microlives than simply have him tell me it's bad for me."

      The problem is that you don't really know whether the bagel is bad for you or maybe even good for you. All you know is that it has an adverse effect on the population as a whole. The number of people for whom it is bad is larger than the number of people for whom it is neutral or maybe even good. Unfortunately, we can't predict which people react which way, so your doctor is not really giving you good personal advice, because he doesn't know which group you fall into. Maybe you are one of those who would benefit from eating the bagel.

      You can probably do the same analysis with things like cars, where some features are bad for some people but good for others. I have noticed lately that the cars I have been driving have large, slanted front pillars that block my vision quite a bit, so that I don't notice cars coming from my left. In my pickup truck the pillar is much thinner and more vertical, so it doesn't obscure my field of vision as much, and I don't have that problem. This leads to people like me being in more accidents. However, maybe it is offset by having the stronger pillars protect me in a rollover accident.

      These sorts of analyses are really difficult to do properly, because there are so many factors to consider, and there is no validated data to support models of the individual phenomena. So the "analysts" decide to include only the phenomena they care about. Do you remember all the advocacy of using margarine instead of butter in the 60s? (probably not, considering your age) The activists had data showing that butter was less "safe" than margarine. Unfortunately, they did not consider trans fats, which had an even worse effect than saturated fats (we think). They did not understand all of the phenomena, but ran a grand experiment anyway. And now we wail about trans fats.

      I also remember a study from about 25 years ago on installing those additional stop lights in the middle at the back of a vehicle. The analysts installed extra lights on a bunch of postal vehicles while leaving others without them, and they found that the vehicles with the extra lights had fewer rear-end collisions, so now we have extra lights on all new vehicles. I would really like to know whether the rear-end collision rate has actually gone down, or whether the initial results were just caused by the novelty effect of people seeing the lights for the first time. I bet the people who originally proposed these lights have not gone back and re-done the work to see if it was ultimately effective - there is no glory there.

      If you don't know what all the important phenomena are, and don't have at least some data to support a first-order approximation of how they affect the outcome, you don't even know the sign of the effect. You are just guessing.

      Sorry for writing so much. You have a good blog. It is just that risk quantification is really hard to do. I once thought it was the silver bullet that would prove nuclear energy was safe, but the more I learned about it, the more I realized that the uncertainties in risk quantification are too large for it to be of any use to anyone other than insurance companies and governments.

  3. Don't apologize for writing too much. I'm happy to be able to talk about this sort of stuff with someone.

    "I bet the people who originally proposed these lights have not gone back and re-done the work to see if it was ultimately effective - there is no glory there."

    I agree completely. We need to put a lot more effort into verifying the effects of policy interventions, especially before rolling them out nation-wide.

    "Do you remember all the advocacy of using margarine instead of butter in the 60s? (probably not, considering your age) "

    No, I don't remember it first hand. Too young. But I know what you're talking about. I had a heated conversation with a friend of mine, a chef, who was bitching that we're the only nation in the world that uses so much margarine even though all the smart people know butter is better. He didn't like me pointing out that the reason we use so much margarine is that all the smart people insisted it was better a generation ago.

    "The problem is that you don't really know whether the bagel is bad for you or maybe even good for you. All you know is that it has an adverse reaction on the entire population. The number of people for whom it is bad, is larger than the number of people for whom it is neutral or maybe even good. Unfortunately, we can't predict which people react which way, so your doctor is not really giving you good personal advice, because he doesn't know which group you fall into. Maybe you are one of those who would benefit from eating the bagel."

    But that doesn't stop doctors (and others) from telling people right now "this is good, this is bad" even though all they have to go on is how the majority of the population responds. Yes, more personalized recommendations are needed. I would prefer to be told that Foo is X units bad for me personally. In the interim, I'd still rather know that Foo is Y units of bad *on average* than just know that Foo is bad on average.

    Yes, all large calculations are (somewhat) wrong. Doing these analyses correctly is very, very, very hard. You need to do things like assume all drivers are roughly average height and weight, or assume patients respond to saturated fat the way the majority of people do. In light of all that, I'm not comfortable with the precision and confidence people assign to the results of these calculations.

    But we're doing them anyway. We might as well have a decent unit of measure to report the results in.
