When we solve problems and make decisions and judgments, we very often use mental shortcuts (so-called heuristics).
We use these heuristics when we have neither the resources nor the time to compare all the available information before making a choice. In other words, heuristics ease the cognitive load of making a decision.
In general, heuristics can be helpful because they offer a quick and efficient way of solving problems, but sometimes they result in systematically erroneous beliefs and cognitive biases. People often fail to answer the following question correctly because they exclude relevant information when they consider the probability:
"What is the probability that object X belongs to class Y?
Tversky & Kahneman (1974) were the first to suggest that people rely on the representativeness heuristic in answering such questions. The representativeness heuristic means that probabilities are evaluated by the degree to which X is representative of Y (i.e., how much X resembles Y). It is just one type of heuristic in which one relies on past experiences and mental concepts of what something should look like when making a decision (p. 1124):
"(...) the probability that Steve is a librarian, for example, is assessed by the degree to which he is representative of, or similar to, the stereotype of a librarian."
People tend to consider only the most prominent cues of representativeness (mental concepts), but the point is that these cues can be misleading. The authors highlight six aspects of probability that people often ignore when they judge probabilities:
1. Insensitivity to prior probability of outcomes
People should consider the prior probability or base-rate frequency of the outcomes. For example, the fact that there are many more farmers than librarians in the population should enter into people's estimate of the probability that Steve is a librarian rather than a farmer.
More specifically, because Steve looks like a librarian, people conclude that he probably is one, even though librarians are so scarce in the population that he is more likely to be a farmer. In other words, people should make predictions on the basis of base-rate frequencies (how many x's and how many y's), but the resemblance crowds this information out.
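Here is a minimal Bayes'-rule sketch of the librarian/farmer example. The base rates and the resemblance probabilities are made-up illustrative numbers, not figures from the paper.

```python
# A minimal Bayes'-rule sketch of the librarian/farmer example.
# All numbers below are illustrative assumptions, not figures from the paper.

p_librarian = 0.02            # prior: librarians are rare in the population
p_farmer = 0.98               # prior: farmers are far more common

p_looks_librarian_given_librarian = 0.90   # resemblance if Steve is a librarian
p_looks_librarian_given_farmer = 0.05      # resemblance if Steve is a farmer

# Posterior probability via Bayes' rule
evidence = (p_looks_librarian_given_librarian * p_librarian
            + p_looks_librarian_given_farmer * p_farmer)
p_librarian_given_looks = p_looks_librarian_given_librarian * p_librarian / evidence

print(f"P(librarian | looks like one) = {p_librarian_given_looks:.2f}")
# Roughly 0.27 with these numbers: despite the strong resemblance,
# the low base rate makes "farmer" the more probable category.
```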
2. Insensitivity to sample size
People assess the likelihood of a sample result by its similarity to the general population. For example, people expect the average height in a random sample of ten men to be about 180 centimeters because this is the average height in the population of men, and they expect this just as strongly for samples of 1000, 100, and 10 men.
In all cases, people view the sample as representative of the general population. However, the average of only ten men is quite likely to deviate substantially from the population average precisely because of the limited sample size. This fundamental fact of statistics is not part of people's intuitions, and as a result, they fail to make accurate judgments of probability.
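A small simulation makes the role of sample size concrete. The population mean of 180 cm and the standard deviation of 7 cm below are illustrative assumptions.

```python
# Sample averages fluctuate far more in small samples than in large ones.
# The population mean (180 cm) and SD (7 cm) are illustrative assumptions.
import random

random.seed(1)
POP_MEAN, POP_SD = 180.0, 7.0

def sample_mean(n):
    return sum(random.gauss(POP_MEAN, POP_SD) for _ in range(n)) / n

for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    # Share of samples whose average lands more than 2 cm from the true mean
    off_by_2cm = sum(abs(m - POP_MEAN) > 2 for m in means) / len(means)
    print(f"n={n:4d}: P(|sample mean - 180| > 2 cm) ~ {off_by_2cm:.2f}")
# Small samples miss the population mean much more often, which is exactly
# what the intuition of "representativeness" ignores.
```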
3. Misconceptions of chance
Consider this example (p. 1125):
"In considering tosses of a coin for heads or tails, for example, people regard the sequence H-T-H-T-T-H to be more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H, which does not represent the fairness of the coin."
A well-known consequence of this belief is the gambler's fallacy (p. 1125):
"After observing a long run of red on the roulette wheel. For example, most people erroneously believe that black is now due, presumably because the occurrence of black will result in a more representative sequence than the occurrence of an additional red."
Again, people only consider what they judge to be the "fairness of the coin". The authors (Tversky & Kahneman, 1974) further state that chance is often viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore equilibrium. This is a false belief: deviations are not corrected by chance, they are merely diluted. As a result, people fail to make accurate predictions.
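A quick coin-flip simulation (assuming a fair coin) illustrates both points: every specific six-flip sequence is equally likely, and a run of heads does not make tails any more likely on the next flip.

```python
# Fair-coin simulation: equal probability of specific sequences, and no
# "self-correction" after a run. The fair coin is the only assumption.
import random

random.seed(2)
TRIALS = 100_000

def flips(n):
    return "".join(random.choice("HT") for _ in range(n))

# 1) Specific sequences: each has probability (1/2)**6, about 0.016
for target in ("HTHTTH", "HHHTTT", "HHHHTH"):
    hits = sum(flips(6) == target for _ in range(TRIALS))
    print(f"P({target}) ~ {hits / TRIALS:.3f}")

# 2) Gambler's fallacy: condition on three heads in a row, check the next flip
tails_after_run = total_runs = 0
for _ in range(TRIALS):
    seq = flips(4)
    if seq[:3] == "HHH":
        total_runs += 1
        tails_after_run += seq[3] == "T"
print(f"P(T | HHH) ~ {tails_after_run / total_runs:.2f}")  # stays near 0.50
```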
4. Insensitivity to predictability
Consider the following example (p. 1126):
"Suppose one is given a description of a company and is asked to predict its future profit. If the description of the company is very favorable, a very high profit will appear most representative of that description; if the description is mediocre, a mediocre performance will appear most representative."
This example illustrates that one does not consider the reliability and accuracy of the description; one only considers whether it is favorable or not. This may result in wrong predictions about future values such as profit, or the outcome of a football game for that matter.
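To make this concrete, here is a small sketch of what taking predictability into account would look like: the forecast shrinks toward the average profit in proportion to an assumed validity of the description. The mean, spread, and the (deliberately low) validity value are illustrative assumptions, not figures from the paper.

```python
# Why predictions should shrink toward the mean when predictability is low.
# All numbers are illustrative assumptions.

MEAN_PROFIT, SD_PROFIT = 10.0, 4.0   # hypothetical profit distribution (in millions)
r = 0.2                              # assumed (low) validity of the description

def regressive_prediction(description_z):
    """Statistically sensible forecast given a standardized favorability score."""
    return MEAN_PROFIT + r * description_z * SD_PROFIT

def representative_prediction(description_z):
    """What the heuristic does: match the extremity of the description."""
    return MEAN_PROFIT + description_z * SD_PROFIT

for z in (-2, 0, 2):
    print(f"description z={z:+d}: heuristic={representative_prediction(z):5.1f}, "
          f"regressive={regressive_prediction(z):5.1f}")
# With low predictive validity, the regressive forecast stays close to the average,
# while the heuristic forecast is as extreme as the description itself.
```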
5. The illusion of validity
The illusion of validity occurs when people are too confident in their probability judgments. The internal consistency of a pattern of inputs has been found to increase one's confidence in predicting the outcome. Consider this example (p. 1126):
"People express more confidence in predicting the final grade-point average of a student whose first-year record consists entirely of B's than in predicting the gradepoint average of a student whose firstyear record includes many A's and C's."
This confidence is often unwarranted: highly consistent patterns are typically observed when the inputs are redundant or correlated, and such redundancy decreases accuracy even as it increases confidence (p. 1126):
"(...) A prediction based on several such inputs can achieve higher accuracy when they are independent of each other than when they are redundant or correlated."
6. Misconceptions of regression
The authors (Tversky & Kahneman, 1974) point out that regression toward the mean is encountered in many instances throughout life, such as the height of fathers and sons, the intelligence of husbands and wives, and the performance of individuals on consecutive examinations. Even though regression toward the mean occurs so often, people rarely recognize it. The authors further state that people do not expect it in many contexts where it is bound to occur, and that they invent spurious causal explanations for it when it does occur.
Consider this example (p. 1126):
"Suppose a large group of children has been examined on two equivalent versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, he will usually find their performance on the second version to be somewhat disappointing. Conversely, if one selects ten children from among those who did worst on one version, they will be found, on the average, to do somewhat better on the other version."
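The aptitude-test example is easy to reproduce in a short simulation, with each score modeled as latent ability plus independent measurement noise; all the numbers below are illustrative assumptions.

```python
# Regression toward the mean: two equivalent test versions, each modeled as
# latent ability plus independent noise. All numbers are illustrative.
import random

random.seed(4)
N_CHILDREN = 1_000

children = []
for _ in range(N_CHILDREN):
    ability = random.gauss(100, 15)             # latent ability
    v1 = ability + random.gauss(0, 10)          # score on version 1
    v2 = ability + random.gauss(0, 10)          # score on version 2
    children.append((v1, v2))

children.sort(key=lambda c: c[0])               # order by version-1 score
bottom10, top10 = children[:10], children[-10:]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Top 10 on v1:    v1={mean([c[0] for c in top10]):.1f}, v2={mean([c[1] for c in top10]):.1f}")
print(f"Bottom 10 on v1: v1={mean([c[0] for c in bottom10]):.1f}, v2={mean([c[1] for c in bottom10]):.1f}")
# The top scorers drop and the bottom scorers improve on the second version,
# purely because extreme scores partly reflect lucky or unlucky noise.
```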
More? Check out Actor-Observer Bias: Why We See Others Differently