Ok, what if I invited you to play the following simple heads or tails game with me:
- Two fair coins are tossed.
- If either of the two coins shows heads, then
- you’ll win 60 cents if the other coin also shows heads.
- you’ll lose 50 cents if the other coin shows tails.
- If both coins show tails then we simply continue to the next round.
I’m glad you’re joining me. Take a seat, make yourself comfortable and let the games begin. Truth be told, your expected loss after a hundred rounds is 10 Euros (more on this later).
You cannot believe that? Let me reassure you that a lot of people think likewise. The key to understanding this lies in the fact that human beings faced with probability challenges often act completely irrationally and, furthermore, this irrational behaviour is predictable. Daniel Kahneman and Amos Tversky, both pioneers in cognitive science, demonstrated this irrational human behaviour in numerous experiments in the 1980s. Such cognitive biases lead to irrational judgments in all disciplines, including justice, medicine and economics, by all kinds of people.
In the next sections we will analyze the coin tossing game stated at the beginning in detail.
One approach to inferring the probabilities is counting the possibilities. This means listing all possible outcomes of the experiment explicitly. The set of all possible outcomes of the experiment is called the sample space. This approach is one of the most effective ones when it comes to probability inference, but it is only suited for small-scale problems (i.e. problems where the number of possible outcomes is small).
Fortunately, our coin tossing game only has a few potential outcomes. Each coin tossed, assuming equal probabilities for landing on either side, has the possible outcomes of heads (H) or tails (T). As we toss two coins, we need to combine the possible outcomes. There are four possible outcomes: (T,T), (H,T), (T,H) and (H,H). Each of the four events has the same probability of occurrence: 1/4.
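The enumeration above can be sketched in a few lines of Ruby. The names (`SIDES`, `outcomes`) are mine, not from the article; only the sample space itself comes from the game.

```ruby
# Enumerate the sample space of two fair coin tosses.
SIDES = %i[H T]

outcomes = SIDES.product(SIDES)    # [[:H,:H], [:H,:T], [:T,:H], [:T,:T]]
probability = 1.0 / outcomes.size  # every outcome is equally likely: 1/4

outcomes.each do |pair|
  puts "#{pair.inspect}: #{probability}"
end
```

`Array#product` builds the Cartesian product, which is exactly the "combine the possible outcomes" step described above.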
Rule #2.1 states that you win 60 cents when both coins show heads. From our sample space we can see that there is only one matching event: (H,H). Rule #2.2 states that I win 50 cents when one coin shows heads and the other shows tails. This is true for two events, (H,T) and (T,H). These two events are mutually exclusive, and because of the third probability axiom the probability of their union can be computed by summing the individual probabilities. That is why I, the author, win with a probability of fifty percent. The remaining probability of 1/4 belongs to the event (T,T), which matches rule #3.
Counting possibilities gets tedious for a large number of possibilities. Binomial coefficients provide a more general way of counting possibilities. Binomial coefficients count the number of sequences (permutations) of n objects where k objects are equal (and the other n−k objects are of another type).
In our coin tossing game n equals two (the number of coins) and k ranges from zero (no heads) to two (two heads). Here I defined heads to be the one type and tails to be the other type of event. An illustrative approach to binomial coefficients is the so-called Pascal’s triangle. Here is Pascal’s triangle for the first six rows.
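As a small illustration, the triangle can be generated in Ruby with its defining rule: each entry is the sum of the two entries above it. The helper name `pascal_rows` is my own.

```ruby
# Build the first n rows of Pascal's triangle.
# Each new row is formed by adding the previous row to a
# shifted copy of itself (padding with zeros at the edges).
def pascal_rows(n)
  rows = [[1]]
  (1...n).each do
    prev = rows.last
    rows << ([0] + prev).zip(prev + [0]).map { |a, b| a + b }
  end
  rows
end

pascal_rows(6).each { |row| puts row.join(' ') }
```

Running this prints the six rows, ending with `1 5 10 10 5 1`.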
Pascal’s triangle is related to the binomial coefficients in the following way: each triangle row (counting from zero) corresponds to a choice of n in the binomial coefficient. The elements within each row correspond to the choices of k (also counting from zero). The number at row n, position k is the number of sequences of n objects where k objects are equal.
In our case we are interested in n equals two (two coins), which pins us to row number two of Pascal’s triangle (counting from zero, i.e. the third row from the top). The elements in that row are 1 2 1: one possibility to choose zero heads, two possibilities to choose one head and one tail, one possibility to choose two heads. The total number of events is four (1+2+1) and the probabilities are again 1/4, 1/2, 1/4.
Constructing Pascal’s triangle (each number is the sum of the two numbers above it) is impractical for large numbers. Instead we’ll use the corresponding formula for binomial coefficients: C(n, k) = n! / (k! · (n − k)!).
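A direct Ruby sketch of the factorial formula might look like this (the helper names are mine). Since Ruby integers are arbitrary-precision, it also works for large n without overflow:

```ruby
# n! computed by folding multiplication over 1..n; (1..0) yields 1 for n = 0.
def factorial(n)
  (1..n).reduce(1, :*)
end

# C(n, k) = n! / (k! * (n - k)!)
def binomial(n, k)
  factorial(n) / (factorial(k) * factorial(n - k))
end

puts binomial(2, 1)  # => 2 (two ways to get exactly one head with two coins)
```

The three values for our game are `binomial(2, 0)`, `binomial(2, 1)` and `binomial(2, 2)`, i.e. the row 1 2 1 of Pascal’s triangle.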
The expected loss/win when playing a game is of fundamental importance to players. Normally, you wouldn’t want to join a game where your expected loss after a hundred rounds is 10 Euros, would you? So how do we calculate the expected value?
The expected value is the probability of success times the reward per success. My reward per success is 50 cents, and my probability of success is 50 percent. Since I lose 60 cents in 25 percent of the games played, we must also take this into account. In n rounds my expected win is therefore n · (1/2 · 50 − 1/4 · 60) cents = n · 10 cents.
For a hundred rounds that sums up to 10 Euros. Since my expected win is your expected loss (there is no one else to pay), you lose the same amount.
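The calculation above is short enough to spell out in Ruby (constant names are mine; the probabilities and payouts come from the game rules):

```ruby
# Author's expected win per round, in cents.
P_ONE_HEAD  = 0.5   # (H,T) or (T,H): author wins 50 cents
P_TWO_HEADS = 0.25  # (H,H): author loses 60 cents

expected_win_per_round = P_ONE_HEAD * 50 - P_TWO_HEADS * 60  # 10.0 cents

# Over 100 rounds, converted from cents to Euros.
expected_win_in_euros = expected_win_per_round * 100 / 100.0

puts expected_win_in_euros  # => 10.0
```

The 25 − 15 = 10 cents per round are exactly the edge the author has over the player.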
Only in rare cases will your actual loss be exactly 10 Euros; it will vary around the 10 Euros. On rare occasions you’ll lose much less (or even win), on rare occasions you’ll lose much more than 10 Euros. Most of the time you will lose between 8 and 12 Euros. The following diagram is a histogram of your loss, created from 1000 simulated games of a hundred rounds each.
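A minimal Monte Carlo sketch of such a simulation, assuming fair coins and the payouts from the rules (the helper `play_game` is my own, not the article’s script):

```ruby
# Simulate one game of `rounds` rounds; returns the player's loss in cents.
def play_game(rounds)
  loss = 0
  rounds.times do
    heads = 2.times.count { [true, false].sample }  # heads shown by 2 fair coins
    loss += 50 if heads == 1  # one head, one tail: author wins 50 cents
    loss -= 60 if heads == 2  # two heads: player wins 60 cents
  end
  loss
end

losses = Array.new(1000) { play_game(100) }
mean_loss = losses.sum / losses.size.to_f

puts "mean loss: #{mean_loss / 100.0} Euros"  # should land near 10
```

Collecting the `losses` array into buckets yields a histogram like the one shown above.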
The tour is not over at this point. Continue reading with part two:
- Ruby scripts containing expected loss calculation, game simulation, binomial distribution calculation for big numbers and much more.