1.5 Building a Strategy Model: Deck Quality and the Metagame


In the last post, we discussed assigning skill ratings to competitors in trading card games. These ratings can be used to predict which competitor we believe is more likely to win a given match.

This time, we take deck strategy into account.

Unlike with games like chess, trading card games are asymmetric in that each competitor brings different cards to the match. Some strategies have an edge on others. We’ll have to take this into account for our working model.

Why Deck Strategy Matters

In games like chess, both players sit down with the same game pieces. All the game pieces are completely symmetrical in their abilities and positional states. There are no surprises that can’t already be seen from the start of the game. A player can’t pull out a special piece from a hidden part of the board, or sacrifice three pieces to give another piece some special, hidden power until the end of turn.

Trading card games, however, are full of exactly these kinds of surprises.

They are asymmetrical. Your deck isn’t like my deck, and even if it is, the exact cards and the order in which they are drawn are different.

Chess is a game in which both players come to the table with pistols. It’s fair in this way.

Trading card games allow players to pick different kinds of weapons: swords, baseball bats, pistols, explosives, etc.

Some weapons do well against another (a sword vs. a baseball bat, for example), while others do less well (a sword vs. an automatic rifle).

Trading card games are unfair in this way, but it is something that deck building tries to take into account: if I can’t beat another strategy outright, how can I upset or undermine the opponent’s strategy to make my win more likely?

Ultimately, all trading card game players know that the deck they choose has some great matchups and some bad ones. We need a way to take this reality into account, something that games like chess, on which so much probability modelling is based, never require.

Adding Deck Detail to Match Data

In the last post, we evaluated a .csv file that contained synthetic data on 500 competitors playing a total of 12,000 games over a tournament season.

To this data, we now add the deck used by each competitor in each match, as shown below:

You can see the revised .csv file here.

There are a total of 15 decks, numbered 1 through 15, randomly assigned to each competitor in each match.

The randomization is such that as the tournament season continues, older decks fall out of use as new decks come into use. This simulates a change in metagame over the course of the season.
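To make that rotation concrete, here is a minimal, hypothetical sketch of one way such a sliding metagame could be simulated in R. The window width, period count, and column names are assumptions for illustration only; this is not the code behind the linked .csv.

# A hypothetical sketch of simulating a rotating metagame. The five-deck
# window, period count, and column names are illustrative only.
set.seed(42)
n_decks   <- 15
n_periods <- 10                                  # rating periods in the season
n_matches <- 12000

matches <- data.frame(Period = sort(sample(1:n_periods, n_matches, replace = TRUE)))

# Each match draws both decks from a five-deck window that slides forward
# as the season progresses, so early decks fall out of use over time.
window_start <- floor((matches$Period - 1) / (n_periods - 1) * (n_decks - 5)) + 1
matches$PlayerDeck   <- window_start + sample(0:4, n_matches, replace = TRUE)
matches$OpponentDeck <- window_start + sample(0:4, n_matches, replace = TRUE)

head(matches)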

Deck Matchup Probabilities

With the deck detail added to the seasonal tournament data, we can assess how well each deck does against each other deck, independent of the competitors that use these decks.

Comparing decks in Excel, we get the following:

Deck       1       2       3       4       5       6       7       8       9      10      11      12      13      14      15
   1  0.5000  0.5618  0.5434  0.5050  0.6029       -       -       -       -       -       -       -       -       -       -
   2  0.4382  0.5000  0.5047  0.4969  0.5288  0.5271  0.4783       -       -       -       -       -       -       -       -
   3  0.4566  0.4953  0.5000  0.5326  0.4907  0.4961  0.5216  0.5245  0.5357       -       -       -       -       -       -
   4  0.4950  0.5031  0.4674  0.5000  0.4736  0.5053  0.4793  0.4806  0.4330  0.5063  0.4286       -       -       -       -
   5  0.3971  0.4712  0.5093  0.5264  0.5000  0.5111  0.4547  0.4832  0.5253  0.4760  0.5595  0.4821  0.6136       -       -
   6       -  0.4729  0.5039  0.4947  0.4889  0.5000  0.5095  0.5241  0.5433  0.5085  0.5224  0.5045  0.5137  0.4286  0.7105
   7       -  0.5217  0.4784  0.5207  0.5453  0.4905  0.5000  0.5116  0.4036  0.5452  0.5891  0.4494  0.4519  0.5306  0.4773
   8       -       -  0.4755  0.5194  0.5168  0.4759  0.4884  0.5000  0.5503  0.5444  0.4795  0.5804  0.5667  0.3684  0.6389
   9       -       -  0.4643  0.5670  0.4747  0.4567  0.5964  0.4497  0.5000  0.5402  0.5039  0.4491  0.4865  0.5439  0.7292
  10       -       -       -  0.4938  0.5240  0.4915  0.4548  0.4556  0.4598  0.5000  0.5000  0.4592  0.5565  0.5000  0.4412
  11       -       -       -  0.5714  0.4405  0.4776  0.4109  0.5205  0.4961  0.5000  0.5000  0.4745  0.4722  0.3909  0.6250
  12       -       -       -       -  0.5179  0.4955  0.5506  0.4196  0.5509  0.5408  0.5255  0.5000  0.5083  0.6538  0.4412
  13       -       -       -       -  0.3864  0.4863  0.5481  0.4333  0.5135  0.4435  0.5278  0.4917  0.5000  0.4649  0.3235
  14       -       -       -       -       -  0.5714  0.4694  0.6316  0.4561  0.5000  0.6091  0.3462  0.5351  0.5000  0.5833
  15       -       -       -       -       -  0.2895  0.5227  0.3611  0.2708  0.5588  0.3750  0.5588  0.6765  0.4167  0.5000

Here we assume that ties are valued at 0.5 wins. Each cell shows the row deck’s historical win percentage against the column deck; dashes mark deck pairings that never occurred in the data.

Taking an example from the table, we can see that Deck 8 has a historical win percentage against Deck 10 of ~55%. Likewise, Deck 10 won against Deck 8 ~45% of the time.

And as expected, each deck has precisely a 50% win probability against itself. A deck playing against itself will either win, lose, or draw, meaning that the opposing deck, itself, has the opposite outcome.

Every win (scored 1) for one copy of the deck is a loss (scored 0) for the other, and a draw scores 0.5 for both, so across all of these mirror matches the results average out to exactly 0.5. Thus, a 50% win probability.
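For readers who would rather build this table in R than Excel, here is a minimal sketch. The file name and the deck/result column names are assumptions about the revised .csv; adjust them to match the actual file.

# A sketch of building the deck-vs-deck win percentage table in R.
# Assumed columns: PlayerDeck, OpponentDeck, Result (1 win, 0.5 draw, 0 loss).
season <- read.csv("season_with_decks.csv")   # hypothetical file name

# Mirror every match so each deck pairing is counted from both sides;
# this also forces the diagonal (a deck against itself) to exactly 0.5.
mirrored <- data.frame(
  Deck     = c(season$PlayerDeck,   season$OpponentDeck),
  Opponent = c(season$OpponentDeck, season$PlayerDeck),
  Result   = c(season$Result,       1 - season$Result)
)

# Mean result per deck pairing; pairings that never occurred come out as NA,
# matching the dashes in the table above.
deck_probs <- with(mirrored, tapply(Result, list(Deck, Opponent), mean))
round(deck_probs, 4)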

Gamma (Γ) & the Gamma Curve

The deck matchup win/loss percentages can help us determine how much of an edge to assign to competitors who use these decks.

The PlayerRatings package in R (which we’ve been using to calculate Glicko2 ratings and predict win/loss probabilities) provides a free parameter called gamma (abbreviated Γ).

  • Assigning Γ=0 gives neither competitor an edge.
  • If Γ<0, the “Player” (vs. “Opponent”) suffers a negative edge, one that subtracts from his or her probability of winning.
  • If Γ>0, the “Player” (vs. “Opponent”) gets a positive edge, one that adds to his or her probability of winning.

But how much Γ should be applied between competitors in a given match?

To help answer this, let’s turn to R with the following code:

# Step 1: Load the PlayerRatings package
library(PlayerRatings)

# Step 2: Set up two competitors with the same rating, deviation, and volatility
startrate <- data.frame(Player=c("A", "B"), Rating=c(1500,1500), 
Deviation=c(350,350), Volatility=c(0.6,0.6))

# Step 3: Set up a match between these two equivalent players
samplematch <- data.frame(Period=1, Player="A", Opponent="B", Result=0.5)

# Step 4: Determine final ratings for both players in this match
samplefinals <- glicko2(samplematch, status=startrate, tau=0.6)

# Step 5: Predict the win probabilities for the first player
# with gamma from -1000 to 1000 in 0.1 increments
gammacurve <- predict(samplefinals,
                      newdata=data.frame(Period=2, Player="A", Opponent="B"),
                      tng=1, gamma=seq(-1000, 1000, 0.1))

# Step 6: Pair each gamma value with its predicted win probability in a
# data frame (this will be useful later for plotting and looking up values)
gammacurve2 <- data.frame(Gamma=seq(-1000, 1000, 0.1), Win_Prob=gammacurve)

What we’ve done, highlighted in Step 5 above, is predict the win probability between two evenly matched competitors.

We didn’t do this just once, but 20,001 times.

Each time we’ve used a different Γ, starting at -1000 and building to 1000 in increments of 0.1. This range also includes Γ=0, which favors neither side.

We can now visualize this on a plot using R and the ggplot2 package.

# Plot the gamma curve from -1000 to 1000 with the ggplot2 package.

library(ggplot2)
ggplot(data=gammacurve2, mapping=aes(x=Gamma, y=Win_Prob)) +
  geom_line(color="red", linewidth=1.25) +
  geom_vline(xintercept=0) +
  labs(title="Gamma & Win Probability",
       subtitle="NicholasABeaver.com",
       caption="Note: Assumes both players have identical Glicko2 Ratings, Deviations, and Volatility.")

We get the following plot:

As expected, Γ=0 does not favor one competitor or another. The lower Γ gets, the more the “opponent” is favored. The higher Γ gets, the more the “player” is favored.

If we look carefully (and think about what Γ is measuring), we see that the win probability can never reach 0 or 1, and that Γ near 0 has a larger effect than Γ farther away from it.

Γ behaves much like the Glicko2 rating scale itself. Each increment of Γ close to 0 shifts the win probability more than the same increment farther from 0, and the effect keeps shrinking as Γ grows, so the predicted probability approaches, but never reaches, 0 or 1. Just as win probabilities in general never reach 0 or 1.
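We can check this flattening effect directly from the gammacurve2 table we just built: the same 100-point step in Γ moves the win probability much more near 0 than it does at the far end of the range.

# Win probability at a given gamma, read off the gammacurve2 table
prob_at <- function(g) gammacurve2$Win_Prob[which.min(abs(gammacurve2$Gamma - g))]

prob_at(100)  - prob_at(0)      # a 100-point step near zero: a sizable jump
prob_at(1000) - prob_at(900)    # the same step far from zero: a much smaller jump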

We can export this data as a .csv file, which can serve us as a useful table.

To do that in R, we use the following code:

write.csv(gammacurve2, "gamma_curve_1_to_1000_by_0.1.csv")

We can see the output .csv file here.

We’ll use this in the next section to illustrate how Γ helps us account for deck-vs-deck quality.

Putting Glicko2 and Deck Gamma Together

Let’s tie win probabilities and deck quality together to illustrate how they work.

We’ll make the following assumptions:

  • Player A uses Deck 9
  • Player B uses Deck 7
  • Both players have Glicko2 Ratings of 1500
  • Both players have Deviations (σ) of 350
  • Both players have Volatility (τ) of 0.6

Using our Deck Matchup Probabilities table, we can see that Player A’s Deck 9 has a 59.64% probability of beating Player B’s Deck 7, as shown below:

 

Looking up a win probability of ~0.5964 in our Γ output table from R, we see the following:

For the matchup of these two decks (9 vs. 7), our Γ = 113.5.
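Rather than scanning the exported table by eye, we can do the same lookup in R by finding the row whose win probability is closest to the deck matchup probability:

# Find the gamma whose predicted win probability is closest to ~0.5964
target <- 0.5964
gammacurve2[which.min(abs(gammacurve2$Win_Prob - target)), ]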

We can now use this in the predict function in R, setting “gamma” equal to 113.5.

predict(samplefinals, newdata=data.frame(Period=2, Player="A", Opponent="B"), tng=1, gamma=113.5)

The output is:

[1] 0.5964414

This is what we expect, because both players have exactly the same skill, the same deviation, and the same volatility.

The only variable that differs is the deck used by each player (Deck 9 vs. Deck 7). Since Deck 9 has a ~59.64% win probability against Deck 7, it makes perfect sense that, given this matchup, the probability for Player A to beat Player B is ~59.64%. Everything else about the two competitors is the same.

We can carry out this same process for any two competitors using any two decks by doing the following:

  1. Find the Ratings, deviation (σ), and volatility (τ) for two given players.
  2. Find the Decks to be used by each player and consult the Deck Matchup Probability Chart for the decks’ win probabilities.
  3. Use the decks’ win probabilities to consult the Gamma (Γ) Chart and find the correct Γ to apply to the match.
  4. Set the predict function with the players’ skill details and correct Γ to find the win probability.

This is a somewhat manual process, which could be automated with software.
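Here is a minimal sketch of what that automation could look like, assuming we already have a fitted ratings object (such as samplefinals above, or one fitted on the full season), the deck matchup matrix, and the Γ lookup table from earlier. The function and argument names are illustrative, not part of the PlayerRatings package.

# A sketch of automating steps 1-4 above. deck_probs is the deck matchup
# matrix (rows = player deck, columns = opponent deck), gamma_table is the
# gamma lookup table (e.g. gammacurve2), and ratings is a fitted
# PlayerRatings object such as samplefinals.
predict_match <- function(ratings, player, opponent,
                          player_deck, opponent_deck,
                          deck_probs, gamma_table, period = 2) {
  # Step 2: deck-vs-deck win probability for this pairing
  matchup_prob <- deck_probs[player_deck, opponent_deck]

  # Step 3: the gamma whose predicted probability is closest to that matchup
  gamma <- gamma_table$Gamma[which.min(abs(gamma_table$Win_Prob - matchup_prob))]

  # Step 4: predict the win probability with that gamma applied
  predict(ratings,
          newdata = data.frame(Period = period, Player = player, Opponent = opponent),
          tng = 1, gamma = gamma)
}

# Example: Player A on Deck 9 vs. Player B on Deck 7
# predict_match(samplefinals, "A", "B", 9, 7, deck_probs, gammacurve2)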

But this is another important step in our proof-of-concept.

Next, we’ll add some fine-tuning to our basic model, putting its various parts together into a cohesive whole.
