With the foundations of our project set, this first post seeks to demonstrate the reason for choosing Bayesian Inference as our probabilistic framework.
What is Probability?
When we say probability, at least in terms of this project in general and events wagering in particular, what we mean is:
“How certain (or uncertain) are we that a given outcome will occur?”
That is, without knowing what the future holds, how do we assign a value to our belief that a given outcome will occur?
A probability of 0 means that we are certain an outcome will not occur, while a probability of 1 means that we are certain it will occur.
Picture a scale running from 0 to 1. Most of what we experience in life isn’t all the way down at 0, nor all the way up at 1. It’s somewhere in between.
But where, exactly?
Classical Probability and Poor Inferences
When we think of probability generally, six-sided dice and coins come most readily to mind and are used liberally in examples.
We assume that the probability of rolling a 6 on a six-sided die is 1/6 and that the probability of getting heads on a coin flip is 1/2.
These are true for the purposes of those thought experiments.
We might be tempted to extend these thought experiments further and apply them to the case of assigning probabilities to a two-player, winner-take-all game like a trading card game (TCG).
If Player A has played 1,000 games and we know that he or she has won 759 of those games, we might say that the probability that Player A wins the 1,001st game is 0.759.
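The naive estimate above is just a long-run relative frequency. As a minimal sketch, using the 759-out-of-1,000 record from the example:

```python
# Naive long-run frequency estimate of Player A's win probability.
# Record taken from the example above: 759 wins in 1,000 games.
wins = 759
games = 1_000

p_win = wins / games
print(p_win)  # → 0.759
```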
That would be a false assumption.
What this simple model lacks is conditionality.
In the 1,001st game, Player A will not play against some long-run average player under some long-run average conditions, but against a particular player under particular conditions.
What’s more, Player A may never have played this theoretical 1,001st opponent before and may never have seen this opponent’s strategy. In fact, that opponent’s strategy might be completely different from the previous 1,000 strategies Player A has encountered, and might be something no one has ever seen or accounted for before.
What is Player A’s probability of winning now?
We must conclude that Player A’s probability of winning the 1,001st game is conditioned on these, and many more, circumstances.
Likewise, this 1,001st opponent, call him or her Player B, has a win probability that is itself conditioned on Player A, Player A’s strategy, and the circumstances of their meeting to play.
Bayesian Inference
The theoretical framework that allows us to assign these conditional probabilities, both in this example and throughout this project, is Bayes’ Theorem.
Bayes’ Theorem states that:
[math] P(A|B)=\frac{P(A)P(B|A)}{P(B)} [/math]
Applied to our case, let A be the event that Player A wins and B the event that Player A’s opponent is Player B. Then the probability that Player A wins given that he or she plays against Player B, P(A|B), is equal to the probability that Player A wins any match, P(A), times the probability that the opponent is Player B given that Player A wins, P(B|A), divided by the overall probability that the opponent is Player B, P(B).
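To make the arithmetic concrete, here is a minimal sketch of the theorem in Python. The prior P(A) reuses the 0.759 win rate from the earlier example; the values for P(B|A) and P(B) are made-up numbers purely for illustration:

```python
# A minimal sketch of Bayes' Theorem with illustrative numbers.
# p_a         : prior, P(A)      -- 0.759 from the example above
# p_b_given_a : likelihood, P(B|A) -- assumed value for illustration
# p_b         : evidence, P(B)   -- assumed value for illustration

def bayes(p_a: float, p_b_given_a: float, p_b: float) -> float:
    """Posterior P(A|B) = P(A) * P(B|A) / P(B)."""
    return p_a * p_b_given_a / p_b

p_a_given_b = bayes(p_a=0.759, p_b_given_a=0.04, p_b=0.05)
print(round(p_a_given_b, 4))  # → 0.6072
```

Note how the posterior rises above the 0.759 prior only when P(B|A) exceeds P(B); here it falls, since the chosen numbers make the Player B matchup relatively unfavorable.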
As we’ll see in the next post, using Excel and the R programming language to construct matrices of player and strategy combinations, this is easy to compute.