Many political … er, Facebook … arguments today boil down to warring anecdotes. Like this one:
“I don’t feel sorry for government workers who aren’t getting paid. Why don’t they have any savings? I know a Fed worker with a really expensive car. Maybe he shouldn’t have spent so much on that car!”
A single tale of excess … or woe, for that matter … is no way to make a decision about an issue. We do it anyway, all the time. Humans have probably done this forever. It’s often called the small sample size problem: we extrapolate too much from our limited view of the world. This leads to warring anecdotes, and to something that looks a lot like the Blind Men and the Elephant problem.
I explored this phenomenon recently in an essay for PeopleScience.com, part of an ongoing series about cognitive biases. It’s a problem that exists far beyond Facebook, creeping into every aspect of our lives. And of our workplaces. Overreacting to a single customer complaint, for example, is the bane of many a manager. Here’s a sample of the piece, but you should read the whole thing at PeopleScience.com.
Or read my entire Cognitive Bias series here
Humans are wired to see patterns, to see clusters, even when there isn’t nearly enough information to do so. An old newsroom joke is that three anecdotes make a trend story. Remember when “4 out of 5 dentists” recommended a certain brand of gum? That pitch was true, even if only five dentists were asked. Consumers are often fooled by such statements, which prey on a phenomenon called sample size insensitivity.
In baseball, fans boo when a star player goes three games without a hit, even though everyone pretty much agrees he’ll end the season with a .300 batting average. Web designers are told to change “buy” button colors from blue to red only a few hours after the launch of an e-commerce site, even if the launch took place at midnight. Workers sometimes earn promotions based on one successful project, and only later do executives find an underling really did all the work. “Jumping to conclusions” is natural, and often immediately satisfying – everyone likes to believe in the clusters they see – but basing decisions on small sample sizes can be a big problem.
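The baseball case is easy to check with a little arithmetic. Here’s a minimal simulation sketch (assuming, hypothetically, about four at-bats per game and independent at-bats) showing how often a genuine .300 hitter goes three straight games without a hit at some point in a 162-game season:

```python
import random

random.seed(42)

TRUE_AVG = 0.300       # the hitter's real, underlying skill
AT_BATS_PER_GAME = 4   # assumption: roughly 4 at-bats per game
GAMES = 162            # one full season
TRIALS = 10_000        # number of simulated seasons

def season_has_hitless_streak(streak_games=3):
    """Simulate one season; return True if the hitter ever goes
    `streak_games` consecutive games without a hit."""
    hitless_run = 0
    for _ in range(GAMES):
        hits = sum(random.random() < TRUE_AVG for _ in range(AT_BATS_PER_GAME))
        hitless_run = hitless_run + 1 if hits == 0 else 0
        if hitless_run >= streak_games:
            return True
    return False

streaky_seasons = sum(season_has_hitless_streak() for _ in range(TRIALS))
print(f"Seasons with a 3-game hitless streak: {streaky_seasons / TRIALS:.0%}")
```

Under these toy assumptions, the majority of simulated seasons contain at least one such streak. The booing fans are reacting to a three-game sample that tells them almost nothing about the player’s true ability.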
Sample size insensitivity is perhaps the most bedeviling problem all scientists face. Rarely can we run an experiment on an entire population. Whether testing a new drug, planning a re-branding campaign, or surveying customers about store heat settings, we are forced to examine some kind of sample and extrapolate to the rest of the population. This leads to errors. Asking too few people about store temperature is just one of the many ways that sample selection can go awry.
It’s tempting to think, in our age of Big Data, that the solution to the small sample size problem is … a large sample size. Tempting, but wrong. Large samples can fail, too. The classic example comes from the 1936 election, when the largest public opinion poll in U.S. history to that point – run by a magazine called Literary Digest – predicted FDR would lose the presidential race.
That poll suffered from something called “ascertainment bias,” or undercoverage bias. Literary Digest tried to be inclusive. It sent sample ballots to 10 million people it found in phone books, association rosters, and so on. About 2.4 million were returned. But that method skewed the sample toward middle- and upper-class Americans.
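You can see the Literary Digest effect in a toy simulation. The numbers below are hypothetical, invented purely for illustration (not the actual 1936 electorate): suppose 30% of voters appear in phone books and club rosters, and those “listed” voters lean against FDR while everyone else leans toward him. A huge poll drawn only from the listed group loses to a far smaller random sample:

```python
import random

random.seed(1936)

POP = 100_000
population = []
for _ in range(POP):
    listed = random.random() < 0.30          # in a phone book / roster
    # Hypothetical splits: listed voters 40% FDR, unlisted voters 70% FDR
    votes_fdr = random.random() < (0.40 if listed else 0.70)
    population.append((listed, votes_fdr))

true_fdr = sum(v for _, v in population) / POP

# A huge sample drawn only from listed voters (Literary Digest style)
listed_voters = [v for listed, v in population if listed]
big_biased = random.sample(listed_voters, 20_000)

# A tiny but genuinely random sample of the whole population
small_random = [v for _, v in random.sample(population, 500)]

print(f"True FDR share:           {true_fdr:.1%}")
print(f"Biased 20,000-voter poll: {sum(big_biased) / len(big_biased):.1%}")
print(f"Random 500-voter poll:    {sum(small_random) / len(small_random):.1%}")
```

The biased mega-poll predicts an FDR loss even though he wins the simulated population comfortably, while 500 randomly chosen voters land close to the truth. Sample quality beats sample size.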
The piece goes on to discuss other reasons sample sizes fail: convenience sampling, survivorship bias, volunteer bias.
What are those? Read the whole piece.