updated 1/10/10


Principal sources:
Paulos, J.A. 1988. Innumeracy: Mathematical Illiteracy and its Consequences. New York: Hill and Wang.
Best, Joel. 2001. Damned Lies and Statistics: Untangling Numbers from the Media, Politicians and Activists. Berkeley: University of California Press.

Innumeracy is the mathematical equivalent of illiteracy, but it is far more widespread and socially accepted. Some people are actually proud to be mathematically incompetent! They fancy themselves "big-picture" people--artistic types whose grand imaginations are not constrained by mere numbers--and they dismiss "math people" as nerds or bean-counters.

Innumerate people often--

  • confuse very large or small orders of magnitude
  • fixate on insignificant but exotic risks while ignoring more significant but mundane risks
  • confuse correlation with causality
  • confuse precision with accuracy
  • see meaningless patterns in random occurrences
  • misunderstand conditional probabilities

Innumerate people are more easily taken in by false advertising claims, self-serving statistics disseminated by special-interest groups, pseudo-science, conspiracy theories, etc.

Number Numbness

Confusion over orders of magnitude is common: "A billion here, a billion there, pretty soon you’re talking real money!" (Everett Dirksen's wry comment about Congressional spending). Everyone knows that a million is more than a thousand, and a billion is more than a million, but our brains aren't naturally wired to compare things across three or six orders of magnitude.

Here are some mental exercises to train your mind to handle orders of magnitude a little better. Forget about precision; the objective is just to get the right order of magnitude.

  1. What's one ten-billionth of a hundred trillion?
  2. How fast does hair grow in miles/second?
  3. If Morris Library has 3 million print holdings (books and bound periodicals), how many pages does it have?
    How many printed characters?
  4. If you packed one mole (6.0221415E+23) of ping-pong balls into a cube, how long would each edge be?
  5. What is the hourly interest paid on the National Debt ($12.3 trillion as of February 2009)?
  6. How many people in the world choke to death on grapes each year?
  7. How many 20-year-old girls in the US are named Jessica? How many 20-year-old girls are named Jessica Simpson?
  8. How many gallons of water are in the oceans?
  9. If the Colossal Man (from the cheesy 1950's horror flick) was really 100 feet tall and normally proportioned, how much would he weigh?

Here are some general tricks to doing these problems:

  • Round numbers up or down to the lead digit, preferably to one if it's close. For mental estimates, pretend there are 10 inches in a foot and 5,000 feet in a mile.
  • Multiply and divide by adding or subtracting exponents. Million, billion, trillion,...exponents increment by 3's. The US national debt is 12E+12, the population is 3E+8 so that's 4E+4 ($40,000) per person; the baby-boomers elected the clowns who spent this, and they're leaving you with the tab, kid!
  • Keep track of dimensions--lengths, areas, volumes, etc. A mole is roughly E+24, so a cube filled with a mole of ping-pong balls is the cube root of that, E+8 balls, on a side. That's about E+7 feet, or at 5E+3 feet/mile, 2E+3 (2,000) miles on a side--a pretty big cube.
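You can check the ping-pong estimate with a few lines of Python. The 1.5-inch ball diameter is my own rough assumption; everything else is rounded the way these notes recommend:

```python
# Rough order-of-magnitude check of the mole-of-ping-pong-balls estimate.
# Assumption: a ping-pong ball is about 1.5 inches across.

AVOGADRO = 6.0e23                       # a mole, rounded to one digit
balls_per_edge = AVOGADRO ** (1 / 3)    # cube root: about 8E+7 balls per edge
edge_feet = balls_per_edge * 1.5 / 12   # inches to feet: about E+7 feet
edge_miles = edge_feet / 5000           # the 5E+3 feet/mile shortcut
print(f"about {edge_miles:,.0f} miles on a side")
```

The answer lands right around the 2,000-mile figure in the text.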

Cultivate your ability to do basic math calculations in your head. It takes a little effort and practice, but it will help you spot erroneous numbers much more easily. When you started doing multi-digit addition, subtraction and multiplication, your teacher taught you to start with the rightmost digits and work to the left, and you often got bogged down before reaching the most important digits. Train yourself to work left to right instead, mentally incrementing digits when you "carry tens." A knowledge of squares helps with some quick multiplications: 13 x 17 = (15-2)(15+2) = 15² - 2² = 225 - 4 = 221. There are lots of tricks like these for refining your mental math skills.
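The difference-of-squares trick is easy to verify in code. A minimal sketch (the `near_square` helper name is just for illustration):

```python
# Difference-of-squares shortcut: (a - b)(a + b) = a^2 - b^2.
# 13 x 17 straddles 15 by 2, so it equals 15^2 - 2^2 = 225 - 4.
def near_square(mid, off):
    """Multiply (mid - off) * (mid + off) using the squares trick."""
    return mid * mid - off * off

print(near_square(15, 2))   # 221, i.e. 13 x 17
print(near_square(20, 3))   # 391, i.e. 17 x 23
```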

A lot of people freeze up at percentages. Quick!--what's forty percent of forty? Seventy percent of thirty? Nobody has trouble with 4x4 or 7x3.

Risk Personalization and Dread

Irrational personalization of risk creates odd distortions in social policies. Americans dread relatively rare but bizarre causes of death (terrorism, shark attacks, kidnappings, airplane crashes) more than frequent, mundane causes (car accidents, Alzheimer's disease). The monthly human slaughter on US highways is greater than all the deaths from 9/11. So why are we spending $55 billion/year on homeland security versus only $18 billion/year to improve highway safety?

The media deserve some of the blame: "If it bleeds, it leads." When a child gets abducted and murdered by some pervert in Florida it gets national news coverage ("That could have been my child!"). The common preventable causes of child deaths (bike accidents, drownings, etc.) are rarely newsworthy.

News media depend on advertising revenues, which depend on how many readers/listeners/viewers they can attract. The media retain readers and viewers by personalizing odd stories to elicit primal emotional reactions like laughter or dread. A woman finds a live snake in her toilet. Killer bees are spreading across the country.

Stranger abductions of children are newsworthy because they elicit parent dread. This dread has a high social cost: parents are more reluctant to let their kids play outside, so they play video games and eat Doritos and get fat. America's child obesity epidemic will produce a lot of disabled, unemployable adults.

Paulos suggests using a logarithmic scale to keep relative risks in perspective. The typical American child's annual risk of kidnapping by a stranger is less than 1 in 5 million. The same child's annual risk of being killed riding a bicycle without a helmet is greater than 1 in 5 thousand.
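Paulos's log-scale suggestion can be sketched in a few lines of Python, using the two illustrative risk figures above:

```python
import math

# Comparing risks on a log10 scale, per Paulos's suggestion.
# Illustrative annual risks from the text.
risks = {
    "kidnapped by a stranger": 1 / 5_000_000,
    "killed biking without a helmet": 1 / 5_000,
}
for name, p in risks.items():
    print(f"{name}: log10 = {math.log10(p):.1f}")
# The two risks sit three whole log-units apart: a factor of 1,000.
```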

Unfortunately, the innumerate person is unimpressed by this difference, and doesn't find emotional satisfaction in anything short of absolute zero risk. The lurid nature of kidnapping triggers an immediate fear reflex from the primitive amygdala before the frontal cortex can even begin analyzing the odds in any rational way. He dismisses the relative probabilities: "Well, what if it was your kid who got kidnapped, huh?"

Americans are remarkably complacent about the high rate of automobile fatalities in the US. Because we tend to have inflated opinions of our driving skills, states need laws to make drivers purchase car insurance and wear seatbelts.

Cars have seat belts, air-bags, safety glass, ABS and crush-resistant frames. They are designed for safety, but more importantly, they are designed to make you feel safer. This creates a well-known "moral hazard" problem: if feeling safer means you drive a little faster, and faster drivers run over more pedestrians, then safety improvements in cars actually kill more pedestrians. There is also an "arms race" problem: in collisions between big SUV's and sedans, about 80% of the fatalities are sedan occupants. My Hummer's lousy fuel efficiency is really insurance: when I accidentally squash your little SmartCar, well, you're not going to look so smart after all!

Imagine a hook-up situation at a college party (strictly hypothetical, of course; you would never behave like this!): After four or five beers, you really like that cute guy/girl who seems very receptive, but you don't have any condoms! He/she is not a member of a known high-risk group for HIV, but you can never be sure. Which option do you choose?

  • Have unprotected sex, risking exposure to HIV.
  • Drive to the drugstore for condoms, risking an accident and/or DUI arrest.
  • Walk to the drugstore.

Statistically, the best option is to just go for it. In this context the risk of contracting AIDS from a single sexual contact without protection would be about 1 in 5 million. The social stigma of HIV is awful, but HIV-positive people can live long, happy lives. The risk of getting killed by driving drunk to the drugstore is at least a hundred times higher, and the risk of getting killed while walking drunk is about five times higher than the risk of driving drunk! (As Steven Levitt says: "Friends don't let friends walk drunk!")

Miscalculating probabilities

A lot of gambling is motivated by erroneous assumptions about probabilities. Games of chance are constructed to generate independent outcomes: the odds of getting heads on the next toss of a coin are independent of whatever sequence of heads and/or tails has already occurred. But after a sequence of heads, some people will think tails are "due." You can see old ladies pumping quarters into slot machines (excuse me--they're "video lottery" machines!) at Delaware Park. Some of them wear adult diapers so that, if they need to urinate, they don't have to give up their seat at a machine that is "due" to pay off.

Casino games and lotteries are designed to disguise their long odds.  Winning a “pick six” lottery looks a lot easier than it is: pay $1 and pick six out of 49 numbers. Your actual odds of hitting the jackpot by picking all six are 1:13,983,816.  Your odds of picking five correct numbers are 1:54,201, and the average prize in the NJ Pick-Six for this is only $2,700.  Your odds of picking just four correct numbers are about 1:1,000 for a lousy $56 average payout.  And you have to pay taxes on your winnings.
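You can verify these pick-six odds directly with Python's `math.comb`:

```python
from math import comb

# Pick-six lottery: choose 6 of 49 numbers.
total = comb(49, 6)                  # all possible tickets
match5 = comb(6, 5) * comb(43, 1)    # tickets matching exactly 5 numbers
match4 = comb(6, 4) * comb(43, 2)    # tickets matching exactly 4 numbers

print(total)                  # 13983816
print(round(total / match5))  # 54201 -- odds against matching five
print(round(total / match4))  # 1032 -- odds against matching four
```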

The lottery is sometimes criticized as a “voluntary tax on stupidity:" the stupider you are, the more you play...and lose. As a revenue source for the state, the lottery functions as a highly regressive tax, extracting more money per-capita from poor (and innumerate) people than wealthy, better-educated people. Its only “virtue” is that it is voluntary, unlike other taxes. Gambling is the third-largest revenue source for the State of Delaware after personal income taxes and corporation taxes.

Casino profits are gamblers' losses. Like the state lottery, casinos celebrate a big winner and ignore the hundreds of losers. That's why slot machines make a big racket when they pay off and stay silent when they don't. Slots parlors like to have hundreds of machines running, so that players stay motivated by hearing frequent payoffs.

Gambling can be genuinely addictive. In brain scans of gambling addicts, subjects hearing the sound of a slot machine paying off reportedly exhibit the same neural responses that addicts exhibit when given drugs, although the stimulus is purely auditory.

The state is addicted to gambling too, and like any addict, it tries to rationalize its addiction, restricting gambling to a few extremely lucrative franchises, and funding a hotline and counseling services for “problem gamblers.”

If you want gambling with decent odds, play the stock market.

People often misunderstand how probabilities can change. The classic "Monty Hall problem" illustrates this. Suppose you are on the old “Let’s Make a Deal” TV game show and Monty (the host) asks you to choose one of three curtains, A, B or C. Behind one curtain is a really nice car; behind the other two are live goats. You state your choice. Then Monty (who knows what's behind the curtains) opens one of the other curtains to reveal a goat, and asks you if you'd like to switch your choice or stay with the original curtain. Since it's down to two curtains, it looks like you have an even chance of winning the car with either curtain. A lot of contestants did choose to stay put.

Here's why you should switch. Suppose you chose curtain A. In the 3-curtain choice your odds of winning were 1:3. When Monty knowingly eliminated a losing curtain, it didn't change those odds: you still have a 1:3 chance of winning with your original choice, which implies a 2:3 chance of winning by switching.

This is not the case if Monty had eliminated one of the other curtains at random and it happened to reveal a goat. Then the two remaining curtains would be equally likely to hide the car, and switching would gain you nothing: each survives and wins with unconditional probability (1/2)(2/3) = 1/3, i.e. a conditional probability of 1/2 either way. Obviously if Monty had eliminated a losing curtain before you chose, your odds would also be 1:2 either way--but you chose first!
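A quick simulation makes the advantage of switching concrete. This is a minimal sketch of the game as described above, where Monty always opens a goat curtain you didn't pick:

```python
import random

# Simulating Monty Hall: Monty always opens a goat curtain you didn't pick.
def play(switch, rng):
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # Monty opens a curtain that is neither your pick nor the car
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(1)
trials = 100_000
wins = sum(play(True, rng) for _ in range(trials))
print(f"switching wins {wins / trials:.3f} of the time")  # close to 2/3
```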

This problem was made famous by Marilyn vos Savant in her newspaper column, and it fooled a lot of prominent mathematicians, including Paul Erdős.

Other Forms of Innumeracy

Randomness vs. Pattern. Pattern recognition is the essence of neural intelligence. Organisms survive by recognizing and responding to different patterns of sensory perception of their environments. They learn to distinguish predators from food, suitable from unsuitable habitat, males of their species from females, etc.; otherwise they die. Pattern recognition is evolved.

Most patterns have some meaning, but some perceived patterns do not; they are purely specious, appearing by random chance. At a fundamental level, intelligence is the ability to distinguish meaningful from specious pattern.

Correlation vs. Causality. People see spurious correlations in random coincidences, and see causality in spurious correlations. One manifestation of this is superstition, which creates expectations that tend to be self-fulfilling. If a black cat crosses my path, I will be on the lookout for bad luck. I'm not actually unluckier on such days, but I'll be more aware of unlucky outcomes. This logical fallacy is known as post hoc, ergo propter hoc ("after it, therefore because of it").

People are often suspicious of coincidences, but some "coincidences" are much more likely than you think: the chance that 2 of 23 randomly-chosen people will share the same birthday is just over 50 percent. Some coincidences can be engineered not to look like coincidences. At the start of football season you could mail letters to 64,000 people predicting the outcome (by point spread) of a football game: half the letters pick one team, half the other. The next week you mail predictions to the 32,000 people who received correct picks the first week; the third week 16,000 predictions to those who received two correct picks; etc. Near the end of the season you would have dozens of people awed by your ability to make eight or nine correct predictions in a row, and willing to pay you serious money for your prediction on the next big game. If only it were legal!
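Both pieces of arithmetic here are easy to check in a few lines of Python:

```python
# Birthday problem: chance that at least 2 of n people share a birthday.
def shared_birthday(n):
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1 - p_distinct

print(f"{shared_birthday(23):.3f}")   # just over 0.5

# Football-prediction scam: the mailing list halves every week.
recipients = 64_000
for week in range(9):
    recipients //= 2
print(recipients)   # 125 people left with nine straight correct picks
```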

Precision vs. Accuracy. People also confuse precision with accuracy. A fairly precise (but very inaccurate) estimate of pi is 8.5278403. A more accurate (but less precise) estimate is 3.  False precision makes numerical estimates look more authoritative than they really are.

Pay attention to how precision can degrade and errors compound through mathematical calculations. Suppose you estimate the three dimensions of a box, each with a +/-10% margin of error: 60" (+/-6") wide by 100" (+/-10") long by 50" (+/-5") high. The point estimate of the volume is 300,000 cubic inches. Because your error in judging lengths is probably systematic rather than random, the error range of your volume estimate isn't +/-10% (270,000 to 330,000). It's 54 x 90 x 45 = 218,700 to 66 x 110 x 55 = 399,300. Your compounded error on the volume estimate is more like -27% to +33%.
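Here is the box calculation worked out in Python, showing how a systematic +/-10% length error compounds on the volume:

```python
# Compounding a systematic +/-10% measurement error on a volume.
w, l, h = 60, 100, 50
nominal = w * l * h                        # 300,000 cubic inches
low = (w * 0.9) * (l * 0.9) * (h * 0.9)    # all three dimensions 10% low
high = (w * 1.1) * (l * 1.1) * (h * 1.1)   # all three dimensions 10% high

print(nominal, round(low), round(high))    # 300000 218700 399300
print(f"{low / nominal - 1:+.1%} to {high / nominal - 1:+.1%}")
```

The relative error on each side cubes: 0.9³ = 0.729 and 1.1³ = 1.331.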

Pyramid schemes: An easy (and illegal) way to make money is to start a Ponzi (pyramid) scheme. Suppose I promise you a return of at least $31,000 for an up-front investment of only $100 in my “investment club.” I get 10 of you to pay me $100 each, and then I tell you how the “club” works. Each of you agrees to split all the money you receive with me. Each of you will recruit 10 of your friends to join our “club;” we will then have 10 payments of $100 to split. Then each of these friends will recruit 10 of their friends, and we will get 100 payments of $50 to split. And if they all recruit 10 friends, we will get another 1000 payments of $25 to split. And so on.

I would collect $1,000 (10 x $100) from each of you, plus $5,000 (100 x $50) from your friends, plus $25,000 (1,000 x $25) from your friends' friends. Then maybe I’d “retire” and leave you at the top of your own “club” so you’d hopefully finish with $31,000 too. Or maybe I’d try to milk it for another round. (Did I mention I only take cash?) Here are the potential receipts through five rounds of this game:

			"dues" rec'd	number of	total rec'd	cumulative
	round		 per player	 players	this round	total rec'd
	  1		   $ 100.00	      10	$     1,000	$     1,000
	  2		   $  50.00	     100	$     5,000	$     6,000
	  3		   $  25.00	   1,000	$    25,000	$    31,000
	  4		   $  12.50	  10,000	$   125,000	$   156,000
	  5		   $   6.25	 100,000	$   625,000	$   781,000

Ponzi schemes like this quickly collapse. Because of the exponential growth in “membership” required, they soon run out of people to recruit. For some entertaining historical accounts of scams such as this, check out the classic book Extraordinary Popular Delusions and the Madness of Crowds by Charles MacKay (1841).
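A few lines of Python show how fast the required membership outruns the population (the 3E+8 US population figure matches the rough number used earlier in these notes):

```python
# Exponential recruitment: each round needs ten times as many players.
US_POPULATION = 3e8    # the rough 3E+8 figure used earlier in these notes

players, round_no = 10, 1
while players < US_POPULATION:
    players *= 10
    round_no += 1
print(round_no)   # round 9 already requires a billion players
```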

The fallacy of averaging averages: Suppose during the first half of baseball season player A bats .300 in 200 at-bats, while player B bats .290 in 100 at-bats; during the second half, A bats .400 in 100 at-bats while B bats .390 in 200 at-bats. So A had the higher batting average in both halves of the season, and they both had the same number of at-bats. But if you do the math, it turns out that B had the higher batting average for the whole season: A got 60 + 40 = 100 hits in 300 total at-bats (.333); B got 29 + 78 = 107 hits in 300 total at-bats (.357). (Statisticians call this kind of reversal Simpson's paradox.)
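Checking the arithmetic:

```python
# Season totals behind the batting-average paradox.
a_hits = 60 + 40     # A: .300 of 200 at-bats, then .400 of 100
b_hits = 29 + 78     # B: .290 of 100 at-bats, then .390 of 200

print(f"A: {a_hits / 300:.3f}  B: {b_hits / 300:.3f}")  # A: 0.333  B: 0.357
# B wins the full season despite trailing A in both halves.
```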

Framing Problems

Context matters a lot. People can be misled by how questions are framed. Which of the two scenarios (taken from Kahneman and Tversky) is more likely?

  1. Ann graduated from college in 1988 with a degree in women's studies, has an assertive personality, is currently unmarried, and works in a bookstore.
  2. Ann graduated from college in 1988 with a degree in women's studies, has an assertive personality, is currently unmarried, works in a bookstore, and is active in women's rights issues.

The first scenario is more likely by definition, simply because it has one less condition. But many survey respondents consider the second scenario to be more "likely" because it conforms better to some stereotype of a feminist.

Here's a framing problem from Barry Schwartz's book The Paradox of Choice: Why More is Less (2004):

Imagine you are the only physician working in an extremely remote village, and 600 people have come down with a life-threatening disease.  Two possible treatments exist.  If you choose Treatment A, you will save exactly 200 people.  If you choose Treatment B, there is a one-third chance you will save all 600 people and a two-thirds chance you will save no one.  Which treatment would you choose, A or B?

A large majority of respondents to a survey posing this question said they would choose Treatment A, opting to save 200 people with certainty.  The identical choice problem was posed to another group of respondents, with the options framed differently:

If you choose Treatment C, 400 people will die. If you choose Treatment D, there is a one-third chance that no one will die, and a two-thirds chance that all 600 people will die.

In this case a large majority chose Treatment D, opting for the chance to save 400 people.  The two versions of the question are logically identical, but saving lives is a lot easier than deciding how many people will die.

Suppose you buy an expensive pair of shoes that turn out to be really uncomfortable. Thaler (cited in Schwartz) suggests that the more you paid for them, the more often you’ll try to wear them; eventually you’ll stop wearing them, but they’ll sit in your closet longer; and they’ll take longer to “depreciate” psychologically before you finally give or throw them away.

Here's another framing problem from Kahneman and Tversky:  If you ask people to choose between a sure $30,000 versus an 80% chance of winning $40,000 and a 20% chance of winning nothing (expected win: $32,000), most people choose the sure $30,000. But asked to choose between a sure loss of $30,000 versus an 80% chance of losing $40,000 and a 20% chance of losing nothing (expected loss: $32,000), most people take the gamble. 
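The expected values behind the two framings are identical except for sign, which a short calculation confirms:

```python
# Expected values for the Kahneman-Tversky gambles.
def expected(outcomes):
    """outcomes: (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gain = expected([(0.8, 40_000), (0.2, 0)])     # vs. a sure +30,000
loss = expected([(0.8, -40_000), (0.2, 0)])    # vs. a sure -30,000
print(f"{gain:.0f} {loss:.0f}")   # 32000 -32000
```

People reject the gamble on the gain side and accept the same gamble on the loss side, even though the expected dollar amounts mirror each other exactly.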

This indicates that people are risk-averse regarding gains, but risk-taking regarding loss avoidance. This win-loss asymmetry is consistent with what we know from contingent valuation analyses of willingness to pay (WTP) and willingness to accept compensation (WTAC) for natural resource amenities (Bishop and Heberlein, et al.).

Science and Pseudo-Science

Innumerate people are often victims of pseudo-science. True science involves the formulation and testing of refutable hypotheses (Popper). Scientists don't really prove hypotheses; they can only disprove them. Science is falsifiable. Pseudo-science is not falsifiable. Karl Popper criticized Freudianism for its immunity to falsification. (Freudian analyst: "You're obviously in denial about this." Patient: "No, I'm not.")

Medicine is particularly prone to quackery, because most medical problems are either: (a) self-correcting; (b) self-stabilizing; or (c) fatal, but exhibit uneven patterns of decline and temporary improvement. Survivors may engage in post hoc, ergo propter hoc thinking, crediting the quack treatment for their cure; non-survivors aren't around to complain.

In one of his funkier free-market riffs, Milton Friedman criticized the AMA as a “guild” of doctors who profit by restricting entry into their profession. He basically recommended letting anybody practice medicine, caveat emptor (“buyer beware”). Over time the good doctors would profit as their reputations grew, while the quacks would eventually go out of business (perhaps after unsuccessful experiments on you!).

"Occam's razor" is a logical principle favoring the simplest, most “parsimonious” explanation for anphenomenon.  (William of Occam was a 14-century English philosopher.)  Any model designed to test a hypothesis should be kept as simple as possible, so that its derived predictions are unambiguous and readily testable.

Sometimes this means sacrificing descriptive realism for predictive clarity. For example, economists can derive clear, testable predictions about consumer behavior by modeling consumers as utility-maximizing automatons; there is no practical need to describe the actual thought processes involved in consumer decision-making.

But one of the weaknesses of economics is the non-exclusivity of its models. The data we have may be consistent with any number of competing hypotheses expressible as alternative models. When we test our chosen model against the data, we either reject it as inconsistent with the data, or we fail to reject it. The process of winnowing out the wrong hypotheses is long and slow.

There is an asymmetry inherent in hypothesis testing. In the parlance of statisticians, a researcher tests an "alternative" hypothesis H1 against a "null" hypothesis H0, and either rejects H0 in favor of H1 or fails to reject it. Hypotheses are never actually proved; they are only disproved. In testing a hypothesis, the researcher can be wrong in either of two ways:

  • she can reject H0 when it is true--a "Type I" error; or
  • she can fail to reject H0 when it is false--a "Type II" error.
Which error is costlier depends on context:

In criminal court, the court's initial presumption is the defendant is innocent (H0) while the district attorney's alternative hypothesis is that the suspect is guilty (H1). The high standard of evidence ("beyond a reasonable doubt") in criminal cases reflects a much higher concern for convicting innocent people (Type I error) than acquitting criminals (Type II error).

On the other hand, suppose a patient asks the doctor for antibiotics to treat his cold, which is most likely viral and unresponsive to antibiotics (H0), but could be bacterial (H1). The doctor probably prescribes the antibiotics, because treating a viral cold with antibiotics won't hurt the guy (Type I error) while denying the guy antibiotics when the cold is bacterial (Type II error) could be construed as malpractice.

Social Statistics

Most of the statistics reported in the media are “social statistics.” Best notes that social statistics have two purposes: the overt purpose is to quantify some characteristic of society; the covert purpose is to influence political opinion. Activists use statistics to dramatize social problems. Advocacy groups use statistics to compete for your attention. True or false, social statistics are part of the “social construction of knowledge.” We tend to accept social statistics uncritically because numbers sound authoritative. But we should always consider the source of the statistic (does he have some vested interest in the situation?); its political purpose; and its likely accuracy (which depends on how it was calculated). Unfortunately, many social statistics are quickly divorced from their sources and, right or wrong, take on lives of their own. They become “mutant” statistics, subject to misinterpretation and distortion.

Many “statistics” are little more than guesses. For example, crime victimization statistics are notoriously fuzzy. Some proportion of crimes don’t get reported (what criminologists call the “dark figure”). If you want to minimize the crime problem, you assume the dark figure is trivial or zero; if you want more police on the streets, you assume it is large. A lot depends on how the problem is defined.

In the 1980’s there was a surge of concern about “missing children,” with kids’ pictures on milk cartons and media claims of up to 2 million children per year going “missing” in the US, and up to 50,000 children per year abducted by strangers. But what is the age range of “children,” and how is “missing” defined? Do we count kids kept too long by non-custodial parents, safe at a known location? Teenage runaways? Schoolchildren who fell asleep on the bus home and wound up at the bus depot? The numbers are easily inflated to elicit predictable parent dread (discussed above). Before you swallow the estimate of 50,000 stranger abductions per year, consider that only about 70 child kidnappings are investigated by the FBI each year.

The "statistic" that 2 million children go missing each year only sounds plausible until you realize that's about 2% of all children in the US.  If "going missing" is random, that would mean almost 28% of children go missing at some time before their 16th birthdays!  You might have known a kid or two who went missing, but 28% of all the kids you have ever known?

Good statistics are more than guesses. They are based on clear and reasonable definitions, accurate measures and representative samples. A large sample size yields a statistic that is more precise and looks more authoritative, but a representative sample yields a statistic that is more accurate.

An old TV ad that claimed "Nine out of ten kids prefer Skippy" was likely based on ten kids of company employees in the marketing department, and maybe the choice was Skippy or nothing, but the statistic sounded vaguely authoritative. They could have stated a confidence interval for this statistic, but that would make them sound unsure of themselves.

On a yes-no question like this, the margin of error varies inversely with the square root of the sample size. Here if N=100, one standard error would be about 3 percentage points, so a 95% confidence interval would be roughly +/-6%. Most people don't understand statistics well enough to demand this kind of information, and they don't hold advertisers, politicians, public advocacy groups, etc. fully accountable for the bad statistics they promulgate.
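A sketch of the margin-of-error calculation, using the standard normal approximation for a proportion:

```python
import math

# Approximate 95% margin of error for a yes/no proportion.
def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 100, 1000):
    print(f"N={n}: +/-{margin_of_error(0.9, n):.1%}")
# "Nine out of ten kids" from a sample of ten carries a margin near +/-19%.
```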

Competing interest groups often argue over statistics—how they are collected and what the numbers mean. The 2000 Census included a “multi-racial” category for the first time. African-American activists opposed the inclusion of this category because it would reduce the number of Americans identifying themselves as “black,” while Native American groups supported the multi-racial category because many Americans are fractionally Native American but don’t otherwise identify themselves as such. The Census typically undercounts urban and minority populations more than rural and white populations. So Democrats (with larger urban and minority constituencies) tend to favor statistical corrections to the actual count data, based on post-enumeration surveys. Republicans (with larger rural and white constituencies) oppose such adjustments. The Census determines the decennial reapportionment of the US House of Representatives as well as the annual distribution of Federal monies for various programs. Census numbers are subjected to frequent court challenges by one side or the other because there is a lot riding on them!

Americans have something of a fetish for social statistics. Best describes four types of response to social statistics:

  • Some people are simply awestruck or fatalistic, and don’t react to statistics at all.
  • Many people are naïve and innumerate; they accept statistics uncritically and disseminate them inaccurately, and sometimes these mutant statistics evolve into urban legends.
  • Some people have learned to be cynical about all statistics (“you can prove anything with statistics”).
  • And a few people have the critical capabilities to discern good statistics from bad.

You can’t really understand a statistic without understanding the process that created it: what was counted and how, who did the counting and why, etc.