Do you think you have a better chance of winning the lottery because you lost the last five times? Big mistake, but you are not alone in making it: our brains are bad at probabilities because of a variety of cognitive biases.
Pierre has been sitting at the roulette table for two hours now, and luck is not with him: he has already lost a lot. To try to recoup his losses, he decides to change strategy. Instead of betting on single numbers, which pay more but are harder to hit, he now bets on red, his lucky color. If red comes up, he pockets double his stake, which will limit his losses. But tonight nothing is going his way: in half an hour, black comes up fourteen times in a row, and red only twice. Pierre does not give up, however, and keeps betting on red: “Black has come up so often that now it is surely red’s turn!”
If the unfortunate gambler is convinced that luck must turn, it is because he is relying on a piece of probabilistic reasoning that is erroneous but very widespread, especially among fans of games of chance. Researchers have named it the gambler’s fallacy, or the Monte Carlo fallacy, after the city famous for its casinos. This bias consists in believing that if an event occurs more frequently than expected during a given period, it will occur less often during the following period, and vice versa.
The gambler’s fallacy: believing that one draw affects the next
This is also our reasoning when we think we have a better chance of winning the lottery if the jackpot has not been won for a long time; indeed, this mistake is so common that the stakes placed grow as more time passes since the last win (and as the amount of the jackpot increases). In roulette, however, whether red or black comes up is unpredictable, because the successive spins are independent of one another. On each spin, the probability of red is the same as that of black, close to 50% (slightly less for each, in fact, because of the green zero, which gives the house its edge): what happens on one spin does not influence in any way what will happen on the next. Likewise, nothing prevents the lottery jackpot from being won twice in a row, in two consecutive, completely independent draws.
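To make this independence concrete, here is a minimal simulation sketch (not from the article; it assumes an idealized wheel with no zero, so red and black are exactly 50/50):

```python
import random

# Sketch under an idealized 50/50 assumption: does a streak of five
# blacks make red any more likely on the next spin?
random.seed(42)
spins = [random.choice("RB") for _ in range(1_000_000)]

# Collect the spin that immediately follows every run of five blacks.
after_streak = [spins[i] for i in range(5, len(spins))
                if spins[i - 5:i] == list("BBBBB")]

print(f"P(red) overall:           {spins.count('R') / len(spins):.3f}")
print(f"P(red | 5 blacks before): {after_streak.count('R') / len(after_streak):.3f}")
# Both frequencies hover around 0.5: past spins do not change the next one.
```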
The gambler’s fallacy is one of the characteristic mechanisms of our probabilistic brain, that is, of the way we assess odds. Indeed, we do not naturally develop a knack for statistical calculation unless we are trained to thwart, or at least inhibit, our own mental automatisms.
Although it was only “theorized” in the 1970s, the gambler’s fallacy has long been a known trap. The mathematician Pierre-Simon de Laplace already mentioned it in 1796, describing in his philosophical essay on probabilities the anguish of men who ardently desired a son and could learn only with anxiety of the births of boys during the month in which they were about to become fathers: imagining that the ratio of these births to those of girls should be the same at the end of each month, they felt that the boys already born made the coming births of girls more likely.
A variant of this mental mechanism described by Laplace involves a couple who, after having had several children of the same sex, keep having more, convinced that by increasing the size of their family, it will end up reflecting the frequency of the two sexes in the general population. But nothing guarantees this, because a family, even a large one, is far too small a sample to be statistically significant.
A small sample rarely represents the whole
The Monte Carlo fallacy thus rests on a well-known bias: belief in the “law of small numbers”, according to which even a small sample would be representative of the group as a whole. In the case of Pierre at the roulette wheel, only a much longer observation of the spins, with a careful tally of how often red and black appear, can demonstrate what we all know rationally: over a long enough run, the two colors come up with an essentially identical frequency, close to 50% each.
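A small sketch illustrates the point (again assuming idealized 50/50 spins; the script and its output are mine, not the article’s): the frequency of red swings wildly in small samples and only settles near 50% in very long runs.

```python
import random

# Observed frequency of red in samples of increasing size,
# under a fair 50/50 assumption.
random.seed(1)
for n in (10, 100, 10_000, 1_000_000):
    reds = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>9,}: frequency of red = {reds / n:.3f}")
```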
Daniel Kahneman, winner of the 2002 Nobel Prize in Economics, and Amos Tversky are the fathers of the study of heuristics, the mental shortcuts that facilitate, but sometimes also contaminate, our decisions. The two psychologists attributed errors like the gambler’s fallacy to the representativeness heuristic: a shortcut that reduces an inferential problem (a statistical estimate of the characteristics of a population from the observation of a sample of it) to a particularly simple judgment, often based on similarity. In other words, we usually estimate the probability of an event by basing our judgment on a limited number of items that we consider representative of a population.
Let us examine this representativeness bias more closely through an experiment developed by Tversky and Kahneman in 1973. The two researchers write the psychological profile of an imaginary student, Tom W. A first group of volunteers must rate the degree of Tom’s resemblance to the typical student of nine fields of study, including law, engineering and library science. Another group estimates the probability that Tom belongs to each of these nine fields. And finally, a third group evaluates how strongly each of the nine fields is represented among the students on campus, by estimating how many young people are enrolled in each one. This is how Tversky and Kahneman describe Tom’s personality: “Tom is very reserved and very shy. He is always ready to help others, but he avoids large groups and prefers quiet places…”
After reading this profile, the majority of volunteers in the second group decide that Tom is a librarian in the making. If they had thought about it longer, they would have understood that the probability of Tom actually being enrolled in library science is in fact very low, since those students are among the least represented on campus, as the third group’s estimates show. Yet most volunteers base their assessment on the principle of similarity: what are the characteristics of a librarian? A contemplative and quiet personality, naturally! Just like Tom.
We find it similar, and therefore likely
The most important piece of information for the probability estimate requested by Tversky and Kahneman is the number of students in each field of study. But when it comes to solving this type of problem, the human brain, instead of checking the frequency of a given event or situation, prefers to use judgments of similarity, which are much easier to access.
Sometimes a conjunction bias is added to the representativeness bias, as the following simple experiment shows. A group of people is invited to listen to the story of Linda, 32, single, independent, with a master’s degree in political philosophy. Linda is described as very concerned with issues of social justice, to the point of often taking part in protests. The volunteers are then asked which is more likely: that Linda is a bank teller, or that Linda is a bank teller and a feminist activist.
Result: 90% of the volunteers in this experiment choose the second option. Yet this goes totally against one of the fundamental rules of probability: the simultaneous occurrence of two events (and there is no direct relationship here between working in a bank and being a feminist) cannot be more likely than either of the two events taken separately, since every bank teller who is also a feminist activist is, first of all, a bank teller. But we tend, intuitively, to “add up” the probabilities of two conjoined facts.
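Stated formally, this is the conjunction rule of elementary probability theory (a textbook identity, not a formula from the article):

```latex
P(A \cap B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A)
```

Since P(B | A) can never exceed 1, “bank teller and feminist” can never be more probable than “bank teller” alone.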
In another experiment, Tversky and Kahneman reveal how difficult it is for our brains to use Bayesian probability models, which, as their name suggests, rely on Bayes’ theorem: a statistical tool that describes the probability of an event given the available knowledge about the conditions linked to that event. This way of calculating probabilities is useful for evaluating inferences: the precision of the estimate increases as we gather more information about the conditions under which the event occurs.
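For reference, here is Bayes’ theorem in its usual textbook form (a standard statement, not quoted from the article), where H is a hypothesis and E the observed evidence:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
            \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```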
In their experiment, the two researchers pose the following problem to a group of volunteers. A taxi is involved in a nighttime accident. Two taxi companies, the Green and the Blue, operate in the city: 85% of the cabs belong to the Green company, 15% to the Blue. A witness identifies the taxi involved in the collision as belonging to the Blue company. The court tests this person’s reliability under the same conditions as on the night of the accident and concludes that the witness correctly identifies each of the two colors in 80% of cases, and is wrong in 20% of cases. What is the probability that the taxi at fault really was blue, given that the witness said it was?
The majority of participants in this experiment give an answer above 50%, or even 80%. But the real probability, calculated with Bayes’ theorem, is much lower. The probability that the taxi is blue and that the witness correctly identifies it as blue is 12% (80% of 15%). The probability that the taxi is green and that the witness mistakes it for blue is 17% (20% of 85%). The witness therefore calls the taxi blue in 29% of cases (12% plus 17%), and the probability that a taxi identified as blue really is blue is only 41% (12% divided by 29%).
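The same calculation, written out as a short script using the figures given in the text (a sketch; the variable names are mine):

```python
# Bayes' theorem applied to the taxi problem.
p_blue, p_green = 0.15, 0.85   # base rates of the two companies
p_correct = 0.80               # the witness names the right color 80% of the time

p_blue_called_blue  = p_blue * p_correct         # 0.12: blue cab, correctly called blue
p_green_called_blue = p_green * (1 - p_correct)  # 0.17: green cab, wrongly called blue
p_called_blue = p_blue_called_blue + p_green_called_blue  # 0.29: witness says "blue"

print(f"P(blue | witness says blue) = {p_blue_called_blue / p_called_blue:.1%}")
# -> 41.4%, far below the intuitive 80%
```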
An overestimation of gains
This relatively simple statistical exercise underlines how much exact reasoning differs from the instinctive judgments we adopt in most cases. And although they are relatively well known, the Monte Carlo fallacy and the representativeness bias are mistakes we still make often. They are particularly formidable because they distort the assessment of risks and dangers, leading us to overestimate expected gains, which will never materialize, and to underestimate drawbacks, which are in fact far more likely.
This observation takes on its full meaning in the field of medicine. Suppose a specialist performs a diagnostic test that is reliable in 99% of cases, for a disease whose prevalence in the population is one in 10,000: a patient with a positive result would still have only about a 1% chance of actually being sick. Why? Because the number of people who do not have the disease is so large that even with an error rate of just 1%, false positives far outnumber true positives. As is often the case, the result is not intuitive: the test is very reliable (much more so than most tests we actually undergo!), but if the disease is rare, the probability of actually being sick after a positive diagnosis remains very low.
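A short script makes the arithmetic explicit (a sketch using the article’s figures; it assumes the 1% error rate applies equally to false positives and false negatives):

```python
# Base-rate effect in diagnostic testing.
prevalence  = 1 / 10_000   # one person in 10,000 has the disease
sensitivity = 0.99         # P(positive | sick); assumed equal to the test's reliability
specificity = 0.99         # P(negative | healthy); same assumption

p_true_positive  = prevalence * sensitivity
p_false_positive = (1 - prevalence) * (1 - specificity)

ppv = p_true_positive / (p_true_positive + p_false_positive)
print(f"P(sick | positive test) = {ppv:.2%}")
# -> about 0.98%: with such a rare disease, false positives swamp true ones.
```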