1.6 Fine-Tuning the TCG Sportsbook Model


Before we move on to simulating a TCG world tournament, a few issues in our working model need to be addressed.

These issues are:

  1. How to account for new decks that enter the competitive format for which we have no previous data.
  2. How the sportsbook will manage the liabilities it owes to winning bettors on either side of a match.
  3. How to predict the winner of the entire tournament from the outset and how bet prices are placed on this outcome.

These three issues will become more central when we move into the next post in this series. For now, let’s work through each of them.

“Rogue” Decks

In the previous post, we determined how much a competitor’s selection of deck contributes to his or her win probability. This parameter, called “gamma” (represented by the Greek letter of the same name: Γ), works with our Glicko-2 algorithm to determine that win probability.

We noted that over time, the metagame (or the competitive environment) evolves. New decks emerge to beat existing decks, making older decks obsolete as time goes on. This is a natural process in trading card games and a big part of their draw and fun.

But what gamma do we assign to a brand-new deck that’s never been seen before? If the current metagame landscape supports 15 decks, and a 16th deck enters the field, how will that deck compare to the other 15, and vice versa?

Consider the following:

The table above shows which decks existed in the format during each period by marking those that were used in each period with a “Yes” (and those that weren’t used in a period with a “No”). For example, we can see that in Period 1, only Decks 1, 2, 3, and 4 existed. In Period 2, Deck 5 entered the format. In Period 3, Deck 6 entered the format, but Deck 1 left, etc.

The World Tournament will occur in a future period, Period 13. What if a new deck, Deck 16, which no one has ever seen before in a tournament setting and for which we have no data, enters the format?

Certainly, given time, we’ll learn how such a deck performs. But as the sportsbook, we have to be ready to give bet prices (and rate the underlying probabilities) before that data is available.

How do we rate these newcomers, or “rogue” decks?

Rating New Decks

To the synthetic tournament data from the regular season, we’ve added new columns to indicate whether a deck used by a competitor is “Old” (meaning existing prior to that period) or “New” (meaning that it debuted during that period).

To keep things consistent, all decks (Decks 1 through 4) that existed in Period 1 were rated as “Old”, since these decks would have been presumed to exist before the start of the season.

Those added details look like the below:

The revised .csv file can be seen here.

Using this additional detail, we can see how well each of the 15 unique decks used during the tournament season fared against any new deck. This is summarized in the table below, where a new row and column titled “New” have been added. Each value is the row deck’s win rate against the column deck.

| Deck | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | New |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.5000 | 0.5618 | 0.5434 | 0.5050 | 0.6029 | - | - | - | - | - | - | - | - | - | - | 0.6029 |
| 2 | 0.4382 | 0.5000 | 0.5047 | 0.4969 | 0.5288 | 0.5271 | 0.4783 | - | - | - | - | - | - | - | - | 0.5088 |
| 3 | 0.4566 | 0.4953 | 0.5000 | 0.5326 | 0.4907 | 0.4961 | 0.5216 | 0.5245 | 0.5357 | - | - | - | - | - | - | 0.5121 |
| 4 | 0.4950 | 0.5031 | 0.4674 | 0.5000 | 0.4736 | 0.5053 | 0.4793 | 0.4806 | 0.4330 | 0.5063 | 0.4286 | - | - | - | - | 0.4268 |
| 5 | 0.3971 | 0.4712 | 0.5093 | 0.5264 | 0.5000 | 0.5111 | 0.4547 | 0.4832 | 0.5253 | 0.4760 | 0.5595 | 0.4821 | 0.6136 | - | - | 0.5227 |
| 6 | - | 0.4729 | 0.5039 | 0.4947 | 0.4889 | 0.5000 | 0.5095 | 0.5241 | 0.5433 | 0.5085 | 0.5224 | 0.5045 | 0.5137 | 0.4286 | 0.7105 | 0.5368 |
| 7 | - | 0.5217 | 0.4784 | 0.5207 | 0.5453 | 0.4905 | 0.5000 | 0.5116 | 0.4036 | 0.5452 | 0.5891 | 0.4494 | 0.4519 | 0.5306 | 0.4773 | 0.5392 |
| 8 | - | - | 0.4755 | 0.5194 | 0.5168 | 0.4759 | 0.4884 | 0.5000 | 0.5503 | 0.5444 | 0.4795 | 0.5804 | 0.5667 | 0.3684 | 0.6389 | 0.5273 |
| 9 | - | - | 0.4643 | 0.5670 | 0.4747 | 0.4567 | 0.5964 | 0.4497 | 0.5000 | 0.5402 | 0.5039 | 0.4491 | 0.4865 | 0.5439 | 0.7292 | 0.5364 |
| 10 | - | - | - | 0.4938 | 0.5240 | 0.4915 | 0.4548 | 0.4556 | 0.4598 | 0.5000 | 0.5000 | 0.4592 | 0.5565 | 0.5000 | 0.4412 | 0.4970 |
| 11 | - | - | - | 0.5714 | 0.4405 | 0.4776 | 0.4109 | 0.5205 | 0.4961 | 0.5000 | 0.5000 | 0.4745 | 0.4722 | 0.3909 | 0.6250 | 0.4964 |
| 12 | - | - | - | - | 0.5179 | 0.4955 | 0.5506 | 0.4196 | 0.5509 | 0.5408 | 0.5255 | 0.5000 | 0.5083 | 0.6538 | 0.4412 | 0.5421 |
| 13 | - | - | - | - | 0.3864 | 0.4863 | 0.5481 | 0.4333 | 0.5135 | 0.4435 | 0.5278 | 0.4917 | 0.5000 | 0.4649 | 0.3235 | 0.4667 |
| 14 | - | - | - | - | - | 0.5714 | 0.4694 | 0.6316 | 0.4561 | 0.5000 | 0.6091 | 0.3462 | 0.5351 | 0.5000 | 0.5833 | 0.5341 |
| 15 | - | - | - | - | - | 0.2895 | 0.5227 | 0.3611 | 0.2708 | 0.5588 | 0.3750 | 0.5588 | 0.6765 | 0.4167 | 0.5000 | 0.5000 |
| New | 0.3971 | 0.4912 | 0.4879 | 0.5732 | 0.4773 | 0.4632 | 0.4608 | 0.4727 | 0.4636 | 0.5030 | 0.5036 | 0.4579 | 0.5333 | 0.4659 | 0.5000 | 0.5000 |
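As an aside, a summary like the “New” row and column above can be built straight from the revised .csv with a short groupby. The sketch below is only illustrative: the file name and column names are hypothetical stand-ins, since the actual revised file may be laid out differently.

```python
import pandas as pd

# Hypothetical layout: one row per game, with the deck played, whether the opposing
# deck was "Old" or "New" in that period, and a 0/1 flag for whether the game was won.
games = pd.read_csv("season_results.csv")     # placeholder file name

# Win rate of each deck against opponents that debuted in that period ("New" decks).
vs_new = (
    games.loc[games["opponent_deck_age"] == "New"]
         .groupby("deck")["won"]
         .mean()
         .round(4)
)
print(vs_new)   # one value per deck, as in the "New" column above
```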

Treating all newcomers as a single “New” category seems, however, an unsatisfactory solution. Yes, we know how each deck fared against a new entrant, but are all new entrants alike?

Does each new deck have the same potential against each existing deck? Intuition tells us that this isn’t correct: some deck types or strategies fare better against others because of their qualities. For instance, an “Aggro” deck may do very well against a “Mid Range” deck, but fare poorly against a “Control” deck.

We need to consider these “deck styles”.

Rating Deck Styles

Our season tournament data contains fifteen unique decks, numbered 1 through 15.

To these we added one of five deck styles: Aggro, Combo, Control, Mid Range, and Mill. These were assigned randomly.

The outcome of this assignment is as follows:

Deck Style
1 Control
2 Combo
3 Aggro
4 Combo
5 Mid Range
6 Mid Range
7 Control
8 Combo
9 Control
10 Control
11 Mill
12 Mill
13 Combo
14 Mid Range
15 Aggro

To the same season tournament data, we add the style detail to each deck for each matchup.

Those details look like the below:

The revised .csv file can be seen here.

Using this added detail, we can now see how well each deck style does against each deck and against each other style (and vice versa). We’ve also kept the detail for matchups against “new” decks.

The data are summarized in the table below; as before, each value is the row’s win rate against the column:

| Deck | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | New | Aggro | Combo | Control | Mid Range | Mill |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.5000 | 0.5618 | 0.5434 | 0.5050 | 0.6029 | - | - | - | - | - | - | - | - | - | - | 0.6029 | 0.5434 | 0.5324 | 0.5000 | 0.6029 | - |
| 2 | 0.4382 | 0.5000 | 0.5047 | 0.4969 | 0.5288 | 0.5271 | 0.4783 | - | - | - | - | - | - | - | - | 0.5088 | 0.5047 | 0.4985 | 0.4490 | 0.5282 | - |
| 3 | 0.4566 | 0.4953 | 0.5000 | 0.5326 | 0.4907 | 0.4961 | 0.5216 | 0.5245 | 0.5357 | - | - | - | - | - | - | 0.5121 | 0.5000 | 0.5184 | 0.4913 | 0.4931 | - |
| 4 | 0.4950 | 0.5031 | 0.4674 | 0.5000 | 0.4736 | 0.5053 | 0.4793 | 0.4806 | 0.4330 | 0.5063 | 0.4286 | - | - | - | - | 0.4268 | 0.4674 | 0.4981 | 0.4795 | 0.4876 | 0.4286 |
| 5 | 0.3971 | 0.4712 | 0.5093 | 0.5264 | 0.5000 | 0.5111 | 0.4547 | 0.4832 | 0.5253 | 0.4760 | 0.5595 | 0.4821 | 0.6136 | - | - | 0.5227 | 0.5093 | 0.5025 | 0.4738 | 0.5052 | 0.5286 |
| 6 | - | 0.4729 | 0.5039 | 0.4947 | 0.4889 | 0.5000 | 0.5095 | 0.5241 | 0.5433 | 0.5085 | 0.5224 | 0.5045 | 0.5137 | 0.4286 | 0.7105 | 0.5368 | 0.5183 | 0.5033 | 0.5192 | 0.4911 | 0.5142 |
| 7 | - | 0.5217 | 0.4784 | 0.5207 | 0.5453 | 0.4905 | 0.5000 | 0.5116 | 0.4036 | 0.5452 | 0.5891 | 0.4494 | 0.4519 | 0.5306 | 0.4773 | 0.5392 | 0.4783 | 0.5109 | 0.4865 | 0.5171 | 0.5361 |
| 8 | - | - | 0.4755 | 0.5194 | 0.5168 | 0.4759 | 0.4884 | 0.5000 | 0.5503 | 0.5444 | 0.4795 | 0.5804 | 0.5667 | 0.3684 | 0.6389 | 0.5273 | 0.5000 | 0.5133 | 0.5235 | 0.4845 | 0.5233 |
| 9 | - | - | 0.4643 | 0.5670 | 0.4747 | 0.4567 | 0.5964 | 0.4497 | 0.5000 | 0.5402 | 0.5039 | 0.4491 | 0.4865 | 0.5439 | 0.7292 | 0.5364 | 0.5606 | 0.4909 | 0.5407 | 0.4752 | 0.4788 |
| 10 | - | - | - | 0.4938 | 0.5240 | 0.4915 | 0.4548 | 0.4556 | 0.4598 | 0.5000 | 0.5000 | 0.4592 | 0.5565 | 0.5000 | 0.4412 | 0.4970 | 0.4412 | 0.4845 | 0.4730 | 0.5042 | 0.4828 |
| 11 | - | - | - | 0.5714 | 0.4405 | 0.4776 | 0.4109 | 0.5205 | 0.4961 | 0.5000 | 0.5000 | 0.4745 | 0.4722 | 0.3909 | 0.6250 | 0.4964 | 0.6250 | 0.5138 | 0.4694 | 0.4487 | 0.4893 |
| 12 | - | - | - | - | 0.5179 | 0.4955 | 0.5506 | 0.4196 | 0.5509 | 0.5408 | 0.5255 | 0.5000 | 0.5083 | 0.6538 | 0.4412 | 0.5421 | 0.4412 | 0.4506 | 0.5474 | 0.5386 | 0.5129 |
| 13 | - | - | - | - | 0.3864 | 0.4863 | 0.5481 | 0.4333 | 0.5135 | 0.4435 | 0.5278 | 0.4917 | 0.5000 | 0.4649 | 0.3235 | 0.4667 | 0.3235 | 0.4725 | 0.5000 | 0.4638 | 0.5114 |
| 14 | - | - | - | - | - | 0.5714 | 0.4694 | 0.6316 | 0.4561 | 0.5000 | 0.6091 | 0.3462 | 0.5351 | 0.5000 | 0.5833 | 0.5341 | 0.5833 | 0.5737 | 0.4745 | 0.5349 | 0.4813 |
| 15 | - | - | - | - | - | 0.2895 | 0.5227 | 0.3611 | 0.2708 | 0.5588 | 0.3750 | 0.5588 | 0.6765 | 0.4167 | 0.5000 | 0.5000 | 0.5000 | 0.5143 | 0.4365 | 0.3514 | 0.4595 |
| New | 0.3971 | 0.4912 | 0.4879 | 0.5732 | 0.4773 | 0.4632 | 0.4608 | 0.4727 | 0.4636 | 0.5030 | 0.5036 | 0.4579 | 0.5333 | 0.4659 | 0.5000 | 0.5000 | 0.4887 | 0.5208 | 0.4650 | 0.4703 | 0.4850 |
| Aggro | 0.4566 | 0.4953 | 0.5000 | 0.5326 | 0.4907 | 0.4817 | 0.5217 | 0.5000 | 0.4394 | 0.5588 | 0.3750 | 0.5588 | 0.6765 | 0.4167 | 0.5000 | 0.5113 | 0.5000 | 0.5182 | 0.4838 | 0.4845 | 0.4595 |
| Combo | 0.4676 | 0.5015 | 0.4816 | 0.5019 | 0.4975 | 0.4967 | 0.4891 | 0.4867 | 0.5091 | 0.5155 | 0.4862 | 0.5494 | 0.5275 | 0.4263 | 0.4857 | 0.4792 | 0.4818 | 0.5000 | 0.4938 | 0.4931 | 0.5118 |
| Control | 0.5000 | 0.5510 | 0.5088 | 0.5205 | 0.5262 | 0.4808 | 0.5135 | 0.4765 | 0.4593 | 0.5270 | 0.5306 | 0.4526 | 0.5000 | 0.5255 | 0.5635 | 0.5350 | 0.5162 | 0.5062 | 0.5000 | 0.5052 | 0.4978 |
| Mid Range | 0.3971 | 0.4718 | 0.5069 | 0.5124 | 0.4948 | 0.5089 | 0.4829 | 0.5155 | 0.5248 | 0.4958 | 0.5513 | 0.4614 | 0.5362 | 0.4651 | 0.6486 | 0.5297 | 0.5155 | 0.5069 | 0.4948 | 0.5000 | 0.5112 |
| Mill | - | - | - | 0.5714 | 0.4714 | 0.4858 | 0.4639 | 0.4767 | 0.5212 | 0.5172 | 0.5107 | 0.4871 | 0.4886 | 0.5187 | 0.5405 | 0.5150 | 0.5405 | 0.4882 | 0.5022 | 0.4888 | 0.5000 |

This is more satisfactory. We can theorize which new decks might enter the format and use these styles as comparisons for our gamma parameter.

For instance, looking at Period 12, which precedes the upcoming Period 13 in which the world tournament will occur, we can see only the following decks in the format (with their corresponding styles):

Deck Style
6 Mid Range
7 Control
8 Combo
9 Control
10 Control
11 Mill
12 Mill
13 Combo
14 Mid Range
15 Aggro

We can theorize about which of the older decks might drop out of the format by the time the world tournament occurs, and which new decks and deck styles will enter to fill the vacuum.

If, by the time of the world tournament, Decks 6 through 10 drop out (perhaps because they are uniquely weak to Deck 15, the latest entrant, which was designed to beat the old “best” decks), our format would look like this:

Deck Style
11 Mill
12 Mill
13 Combo
14 Mid Range
15 Aggro

What new decks will emerge to exploit the power vacuum left in such a competitive environment?

We now have the tools to consider this and give probabilities.
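To make that concrete, here’s a minimal sketch of how a style-vs-style figure could serve as a provisional matchup prior for a rogue deck until real results arrive. The surviving decks and the style table come from above; the choice to debut a hypothetical “Deck 16” as an Aggro deck, and the idea of seeding its gamma from style matchups, are illustrative assumptions rather than anything computed in this post.

```python
import pandas as pd

# Style-vs-style win rates (row style vs. column style), taken from the table above.
styles = ["Aggro", "Combo", "Control", "Mid Range", "Mill"]
style_vs_style = pd.DataFrame(
    [
        [0.5000, 0.5182, 0.4838, 0.4845, 0.4595],   # Aggro
        [0.4818, 0.5000, 0.4938, 0.4931, 0.5118],   # Combo
        [0.5162, 0.5062, 0.5000, 0.5052, 0.4978],   # Control
        [0.5155, 0.5069, 0.4948, 0.5000, 0.5112],   # Mid Range
        [0.5405, 0.4882, 0.5022, 0.4888, 0.5000],   # Mill
    ],
    index=styles, columns=styles,
)

# Hypothetical rogue deck: suppose Deck 16 debuts in Period 13 as an Aggro deck.
surviving = {"Deck 11": "Mill", "Deck 12": "Mill", "Deck 13": "Combo",
             "Deck 14": "Mid Range", "Deck 15": "Aggro"}
new_deck_style = "Aggro"

# Use the style-vs-style rate as a provisional prior until real match data arrives.
priors = {deck: style_vs_style.loc[new_deck_style, style]
          for deck, style in surviving.items()}
print(priors)   # e.g. Deck 16 (Aggro) vs Deck 11 (Mill) -> 0.4595
```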

Sportsbook Risk Management

If the house does not carefully manage its risk, an upset outcome of an event can ruin it.

Bookmakers try to keep the liability, that is, the amount of money the book will pay out to the winning bettors on one side, as equal as possible on both sides of an event.

Three Scenarios & Six Outcomes

Consider the following:

We have Player A vs. Opponent B in a match under three different scenarios.

  1. In Scenario 1, the liability for both players is completely independent.
  2. In Scenario 2, the liability for both players is identical.
  3. In Scenario 3, the liability for both players is managed to be within a narrow margin of one another.

As can be seen in the outcomes in the lower part of the table, unmanaged risk can ruin the house. A loss of 53.7% is simply catastrophic and absolutely unacceptable in Outcome 1B (that is, Scenario 1 if Player B wins). This potential loss, however likely or unlikely, is not balanced by the potential upside (Outcome 1A, Scenario 1 if Player A wins), which yields a GGR margin of 37.3%. Weighing both outcomes, we should expect a long-run margin of -8.2% in this scenario, which is also unacceptable.

In Scenario 2, the liabilities on both sides are identical, and so, too are the payouts to players. This is ideal and lands us a tidy profit. But reality is never so good to us.

In Scenario 3, the sportsbook manages its risk by limiting bets on either side so that the liabilities stay within a close margin of one another. The fact that the profit in Scenario 3 is higher than in Scenario 2 is the result of randomness; we should expect perfect parity to be the best option, and with risk management, we’re trying to get as close to perfect parity as possible.

If the liability is equal on both sides, the house is indifferent to the outcome of the game. No matter which side wins, the house gets its cut. We’re happy with that. It is this ideal of perfect parity of liability that we’re seeking.

How We’ll Model Risk Management

So how do we put this into practice in our model?

In our next post in which we model the outcome of a world tournament, we’ll assume that our traders are managing risk by limiting bets on either side so that they are roughly equal.

In each simulated game, we will apply the following rules:

  1. The handle on “Player A” will be a random dollar amount between $5,000 and $10,000. This creates an independent variable.
  2. The handle on “Player B” will be based on the handle for Player A as follows:
      • The handle will be randomly determined to be within -10% and +10% of Player A’s handle.
      • If our calculated win probability for Player B is less than 0.5, we will apply a divisor to the handle we take for the player (see below).
      • If our calculated win probability for Player B is greater than 0.5, we will not further modify the handle for Player B.

The divisor applied to Player B’s handle when Player B’s win probability is less than 0.5 is:

[math] \text{Divisor} = \frac{\text{Moneyline (Player A)}}{100} [/math]

This risk management model can thus be summarized as follows:

The more that Player A is favored over Player B (or the more of an underdog Player B is), the more the handle on Player B is limited. This is because, as in our example scenarios and outcomes above, an upset win by an underdog can wipe out the house.

The noise in the model (the -10% to +10% differential) keeps things from being perfect; we shouldn’t expect perfect liability parity. This model brings us within striking distance of it and, I think, is reasonable for a real-world application.
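For what it’s worth, those handle rules are simple enough to sketch in a few lines of Python. One assumption in the sketch that isn’t spelled out above: because a favorite’s American moneyline is negative, the code takes the magnitude of Player A’s moneyline when computing the divisor.

```python
import random

def simulate_handles(p_b_win: float, moneyline_a: int) -> tuple[float, float]:
    """Return (handle_a, handle_b) for one simulated match under the rules above."""
    handle_a = random.uniform(5_000, 10_000)           # rule 1: independent random handle on A
    handle_b = handle_a * random.uniform(0.90, 1.10)   # rule 2: within -10%/+10% of A's handle
    if p_b_win < 0.5:                                  # Player B is the underdog...
        divisor = abs(moneyline_a) / 100               # ...assumed |moneyline| in the divisor
        handle_b /= divisor                            # ...so the handle on B is limited
    return handle_a, handle_b

# Example: A is a -180 favorite and B has a 0.36 win probability, so B's handle is cut by 1.8.
print(simulate_handles(p_b_win=0.36, moneyline_a=-180))
```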

Predicting the Winner of a World Tournament

What is the probability that a given player invited to the world tournament will win the entire event? What place do we think each competitor will get? Can we set bet prices for these outcomes?

The Bradley-Terry Model (BTM) allows us to estimate reasonable answers to these questions.

BTM uses a “preference” algorithm that compares competitors based on their relative strengths and returns a win probability for each. These probabilities, by the way, sum to 1, which means they are each competitor’s probability of winning the whole shebang.

Without giving away too much about the next post, in which we simulate a world tournament, assume we have eight players, Players A through H, for whom we wish to compute the probability of winning the entire event.

For all players, we calculate their win probabilities against one another (the values below are arbitrarily chosen for the sake of this example):

| | A | B | C | D | E | F | G | H |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 0.5000 | 0.5015 | 0.5175 | 0.6842 | 0.5153 | 0.6529 | 0.5701 | 0.6023 |
| B | 0.4986 | 0.5000 | 0.4582 | 0.6861 | 0.5141 | 0.6001 | 0.5701 | 0.6028 |
| C | 0.4826 | 0.5418 | 0.5000 | 0.6696 | 0.5555 | 0.6375 | 0.5528 | 0.5860 |
| D | 0.3158 | 0.3140 | 0.3305 | 0.5000 | 0.3262 | 0.4663 | 0.3751 | 0.4124 |
| E | 0.4848 | 0.4859 | 0.4445 | 0.6739 | 0.5000 | 0.5867 | 0.5561 | 0.5895 |
| F | 0.3472 | 0.3999 | 0.3625 | 0.5339 | 0.4133 | 0.5000 | 0.4093 | 0.4461 |
| G | 0.4299 | 0.4301 | 0.4473 | 0.6249 | 0.4441 | 0.5909 | 0.5000 | 0.5361 |
| H | 0.3977 | 0.3973 | 0.4141 | 0.5876 | 0.4106 | 0.5540 | 0.4639 | 0.5000 |

To make life easy for us, we’ll employ the BTM with an excellent Excel plugin from Charles Zaiontz at Real Statistics.

With this table and the =BT_MODEL() function from this plug-in, we get the following:

Player Probability
A 0.1420
B 0.1384
C 0.1414
D 0.0950
E 0.1350
F 0.1066
G 0.1251
H 0.1164

This means, for example, that Player A is estimated to have a 14.2% probability of winning the tournament between these 8 players. Likewise, Player B has a 13.84% probability.
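For readers who prefer code to spreadsheets, a Bradley-Terry fit can also be run in a few lines of Python. The sketch below uses the standard MM (Zermelo) iteration and then normalizes the fitted strengths to sum to 1; it is one common way to fit the model, not necessarily the exact routine behind =BT_MODEL(), so the decimals may differ slightly from the table above.

```python
import numpy as np

# Pairwise win probabilities (row player beats column player), from the table above.
players = list("ABCDEFGH")
P = np.array([
    [0.5000, 0.5015, 0.5175, 0.6842, 0.5153, 0.6529, 0.5701, 0.6023],
    [0.4986, 0.5000, 0.4582, 0.6861, 0.5141, 0.6001, 0.5701, 0.6028],
    [0.4826, 0.5418, 0.5000, 0.6696, 0.5555, 0.6375, 0.5528, 0.5860],
    [0.3158, 0.3140, 0.3305, 0.5000, 0.3262, 0.4663, 0.3751, 0.4124],
    [0.4848, 0.4859, 0.4445, 0.6739, 0.5000, 0.5867, 0.5561, 0.5895],
    [0.3472, 0.3999, 0.3625, 0.5339, 0.4133, 0.5000, 0.4093, 0.4461],
    [0.4299, 0.4301, 0.4473, 0.6249, 0.4441, 0.5909, 0.5000, 0.5361],
    [0.3977, 0.3973, 0.4141, 0.5876, 0.4106, 0.5540, 0.4639, 0.5000],
])

def bradley_terry(P: np.ndarray, iters: int = 500) -> np.ndarray:
    """Fit Bradley-Terry strengths by MM iteration, treating each pairing as one game."""
    n = P.shape[0]
    s = np.ones(n)
    for _ in range(iters):
        new_s = np.empty(n)
        for i in range(n):
            others = [j for j in range(n) if j != i]
            wins = P[i, others].sum()                        # expected wins for player i
            denom = sum(1.0 / (s[i] + s[j]) for j in others)
            new_s[i] = wins / denom
        s = new_s / new_s.sum()                              # normalize so strengths sum to 1
    return s

for name, prob in zip(players, bradley_terry(P)):
    print(f"{name}: {prob:.4f}")   # read, as in the post, as outright win probabilities
```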

We can assign bet prices to these probabilities using our previously established methods. We’ll assume an “overround” of 10% on the real computed probabilities of winning to bake in our bookmaker’s profit.

Doing so, we get the following moneyline odds:

| Player | Probability | Overround | Moneyline |
| --- | --- | --- | --- |
| A | 0.1420 | 0.16 | +540 |
| B | 0.1384 | 0.15 | +557 |
| C | 0.1414 | 0.16 | +543 |
| D | 0.0950 | 0.10 | +857 |
| E | 0.1350 | 0.15 | +573 |
| F | 0.1066 | 0.12 | +753 |
| G | 0.1251 | 0.14 | +627 |
| H | 0.1164 | 0.13 | +681 |

This means that, for example, a bet placed on Player A to win the tournament would return a total of $640 on a $100 stake ($540 in winnings plus the returned stake).
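The probability-to-price step itself is mechanical. Here is a minimal sketch, assuming the overround is applied by scaling each outright probability up by 10% and then converting to American odds (this reproduces the Moneyline column in the table above):

```python
def price_outright(prob: float, overround: float = 0.10) -> tuple[float, int]:
    """Apply the 10% overround, then convert the padded probability to a moneyline."""
    p = prob * (1 + overround)                  # e.g. 0.1420 -> 0.1562
    if p < 0.5:
        moneyline = round((1 / p - 1) * 100)    # underdog: positive moneyline
    else:
        moneyline = round(-p / (1 - p) * 100)   # favorite: negative moneyline
    return p, moneyline

print(price_outright(0.1420))   # -> roughly (0.1562, 540), i.e. Player A at +540
```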

These probabilities, and their corresponding prices, relate to each player at the outset of the tournament, before any games have been played. After each update (e.g., after games occur and players either win or lose), the field will shrink as players are eliminated and any new bets placed on the winner of the whole event will need updated prices. The BTM can still do this for us.

Suppose, after Round One, Players C, D, F, and H are eliminated, leaving only Players A, B, E, and G.

Any new bets placed on the ultimate winner would be based on these matchup probabilities:

| | A | B | E | G |
| --- | --- | --- | --- | --- |
| A | 0.5 | 0.501526091 | 0.515309726 | 0.570093835 |
| B | 0.498600849 | 0.5 | 0.514052912 | 0.570052391 |
| E | 0.484817054 | 0.485947088 | 0.5 | 0.556053431 |
| G | 0.429906165 | 0.430075784 | 0.44407562 | 0.5 |

Applying the same methods, we come to updated moneyline odds of:

| Player | Probability | Overround | Moneyline |
| --- | --- | --- | --- |
| A | 0.2608 | 0.2869 | +283 |
| B | 0.2603 | 0.2864 | +284 |
| E | 0.2533 | 0.2787 | +295 |
| G | 0.2255 | 0.2480 | +344 |

A bet on Player A to win the tournament would now return a total of $383 on a $100 stake ($283 in winnings plus the returned stake). As the outcomes become more certain, the odds get shorter and the payouts smaller.
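If you followed the earlier bradley_terry() sketch, the mid-tournament update is just a refit on the surviving players. A hypothetical continuation (it reuses P and bradley_terry() from above, and its decimals may differ slightly from the table):

```python
# Indices of the surviving players A, B, E, and G in the original 8x8 matrix.
alive = [0, 1, 4, 6]
P_alive = P[np.ix_(alive, alive)]   # 4x4 pairwise matrix for the remaining field

for name, prob in zip("ABEG", bradley_terry(P_alive)):
    print(f"{name}: {prob:.4f}")    # refit outright probabilities for the final four
```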

We’ll quote prices like these and track their profitability for the house in our simulation of the world tournament.

Next: Simulating the World Tournament

In the next post, we’ll select the top 32 players from the tournament season and invite them to play in a World Tournament.

These players will go head-to-head in a single-elimination bracket that will last five rounds until a winner is declared.

We’ll take bets on each match and track our profitability along the way.

We’ll simulate this 1,000 times and analyze the results.

Finally, we’ll put everything we’ve discussed in this project together and see if our model proves viable and where there might be opportunities for improvement.
