Friday, August 21, 2009

Anybody Want to Play Some Risk?

So, I like the board game Risk, but nobody else seems to, or, well, nobody who is around at the moment. This means I have to play it on my own if I want to play it at all, which is actually okay, because while I am pretty good at it, I have been wondering whether I could come up with some specific general principles for playing the game. It is kind of hard to explain what I mean by "specific general principles," but what it amounts to is finding a good way of modeling the game mathematically. If you are not familiar with the game, I would suggest looking it up on Wikipedia or somewhere, because I don't feel like explaining it and you probably aren't interested anyway.

It is intriguing mathematically because it seems as though it can be modeled partly through graph theory and partly through probability. Battles in the game are decided by dice rolls (not just a simple one-die-versus-one-die roll; it is a bit more complex, if you are not familiar), and probabilities for dice rolls are in general pretty easy to figure, so that seems like a reasonable starting point. If you have played the game, you know that it is unwise to attack territories when you have few armies, not just because you need to conserve armies in general, but also because the number of dice you can roll depends on the number of armies you are attacking or defending with. So, one general principle that Risk players would be familiar with is deciding whether or not to attack based on the number of dice available. (It is complicated by the move-at-least-as-many-armies-as-dice rule, but that is a minor consideration, and the number of dice one should use is not determined by probability alone; I am only working out part of the principle here.) By finding the actual probabilities, I can determine exactly how many dice one should use, rather than settling for a rough general principle.

As I said, probability with dice is generally not too hard, but there are a couple of things to keep in mind here.

1. Ties go to the defender. This point is pretty easily compensated for when computing probabilities, as it just means changing a greater-than to a greater-than-or-equal-to.

2. When each side uses more than one die, the attacker's and defender's highest dice are compared, then the next highest. This complicates the computation considerably. I will try to explain. I will write Px(y) to mean the probability of rolling a value of y with x dice. Note that this is probably not good notation, since what does it mean to roll y with 2 dice? Does that mean one y? At least one y? All y's? A quick and natural fix is to let y be a tuple of the form (y1, y2, y3) for x = 3, (y1, y2) for x = 2, etc. Anyway, that will suffice for the moment.

With one die versus one die, we can figure that

P(attacker wins) = SUM[n=1 to 6] P1(n)*P1(less than n) = SUM[n=1 to 6] (1/6)*((n-1)/6) = 15/36 = 5/12. By symmetry the defender's die is strictly higher with probability 5/12 as well, and a tie (which goes to the defender) happens with probability 1/6; the cases check out, since 5/12 + 5/12 + 1/6 = 1. If you didn't follow all that notation, don't worry. The point is that this calculation is easy and requires at most a linear sum, which is easy enough that Gauss did it at eight years old. I know I have mentioned that before.
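Since this is easy enough to check by exhaustion, here is a quick Python sketch (mine, nothing official) that enumerates all 36 one-die-versus-one-die rolls and tallies the three outcomes:

```python
from fractions import Fraction
from itertools import product

# Brute-force check of the 1-vs-1 battle: enumerate all 36 equally likely
# (attacker, defender) pairs and count who wins, remembering that ties go
# to the defender (so the attacker only wins on strictly greater).
outcomes = list(product(range(1, 7), repeat=2))
total = len(outcomes)  # 36

p_attacker = Fraction(sum(a > d for a, d in outcomes), total)   # attacker strictly higher
p_defender = Fraction(sum(d > a for a, d in outcomes), total)   # defender strictly higher
p_tie      = Fraction(sum(a == d for a, d in outcomes), total)  # tie, army goes to defender

print(p_attacker, p_defender, p_tie)  # 5/12 5/12 1/6
```

The tie probability gets lumped into the defender's column in practice, which is exactly why 1-vs-1 attacks are a losing proposition (7/12 against you).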

Now, let's consider a more difficult problem. What are my probabilities for winning with two dice versus two dice? If we would like to follow our previous solution's path, then we could ask what P2(n) and P2(less than n) are, but with two dice on each side the rolls get paired off, so instead let's break the battle into cases:
SA>SD, IA>ID -> win 2
SA>SD, IA<=ID -> win 1, lose 1
SA<=SD, IA>ID -> win 1, lose 1
SA<=SD, IA<=ID -> lose 2

What I mean here is that S = the higher of the two dice (S for superior) and I = the lower of the two dice (I for inferior); the case I = S is allowed, actually. A and D stand for attacker and defender, respectively. "Win" and "lose" are from the attacker's point of view, arbitrarily.
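If you would rather let a computer enumerate the cases than trust the algebra, here is a little Python sketch (again mine, just a brute-force tally) that runs through all 6^4 rolls and counts the cases above:

```python
from fractions import Fraction
from itertools import product

# Enumerate every 2-vs-2 roll, sort each side's pair into (S, I) as in the
# text, and count how many of the two paired comparisons the attacker wins.
# Ties go to the defender, so the attacker wins a comparison only on
# strictly greater.
counts = {"win 2": 0, "win 1, lose 1": 0, "lose 2": 0}
for a1, a2, d1, d2 in product(range(1, 7), repeat=4):
    SA, IA = max(a1, a2), min(a1, a2)
    SD, ID = max(d1, d2), min(d1, d2)
    wins = (SA > SD) + (IA > ID)  # number of comparisons the attacker wins
    if wins == 2:
        counts["win 2"] += 1
    elif wins == 1:
        counts["win 1, lose 1"] += 1
    else:
        counts["lose 2"] += 1

total = 6 ** 4  # 1296 equally likely rolls
for case, n in counts.items():
    print(case, f"{n}/{total}")
# win 2: 295/1296, win 1, lose 1: 420/1296, lose 2: 581/1296
```

So the attacker sweeps both comparisons about 22.8% of the time and loses both about 44.8% of the time, which matches the intuition that defending 2-vs-2 is favorable.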

The advantage of approaching the problem from this direction is that finding P2(S=n) turns out to be very simple: it is (2n-1)/36. To get this result, I just drew a grid of coordinate pairs and marked which ones had n as the higher die. If you do this, you will notice you get an L-shape (possibly rotated, depending on where you start numbering) of arm length n. That means there are 2(n-1)+1 = 2n-1 squares in the grid with n as the higher of the pair, and clearly 36 total pairs. From there we can figure probabilities for SA>SD and SD>SA, give or take some equal signs, but hopefully you can fill in the blanks. Of course, if we want to know P(SA>SD) in general, we need to use a sum with n ranging from 1 to 6, as before. That is still only half the battle, since we have to figure the I's into our probability; that is, we are at an intermediate step. It is not entirely useless, one might note, to know P(SA>SD) on its own: it does at least provide an upper bound for P(win 2) and a lower bound for P(win at least 1). Anyway, I haven't worked out much more, but that is merely because I don't like computing things too much.
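The (2n-1)/36 formula and the SA-versus-SD sum are also easy to check in Python (this is my own scratch code, not anything standard):

```python
from fractions import Fraction
from itertools import product

# Check the L-shape formula P2(S=n) = (2n-1)/36, then sum over n to get
# P(SA > SD), the chance the attacker's higher die beats the defender's
# higher die.
pairs = list(product(range(1, 7), repeat=2))

def p2_S(n):
    """Probability that the higher of two dice equals n."""
    return Fraction(sum(max(p) == n for p in pairs), 36)

# The grid-counting argument from the text, verified for every n:
assert all(p2_S(n) == Fraction(2 * n - 1, 36) for n in range(1, 7))

# P(SA > SD) = SUM over n of P2(SA = n) * P(SD < n)
p_SA_beats_SD = sum(p2_S(n) * sum(p2_S(m) for m in range(1, n))
                    for n in range(1, 7))
print(p_SA_beats_SD)  # 505/1296
```

505/1296 is about 0.39, which is indeed an upper bound for the 295/1296 chance of winning both comparisons and a lower bound for the chance of winning at least one.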

You might notice I skipped the 1-2 and 2-1 cases here. They are actually fairly simple, and you can derive them relatively quickly, since you don't really have to worry about pairing dice: only one comparison happens, so you just figure sums of products like P1(n)*P2(S less than n).
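Since only one comparison happens in the lopsided cases, a single function covers them all; here is a hedged brute-force sketch (the function is mine) that compares the best die on each side:

```python
from fractions import Fraction
from itertools import product

# In any case where one side rolls a single die, only one comparison is
# made: the attacker's best die against the defender's best die, with ties
# to the defender.
def p_attacker_wins(n_attack, n_defend):
    """P(attacker wins the single comparison) by exhaustive enumeration."""
    wins = total = 0
    for roll in product(range(1, 7), repeat=n_attack + n_defend):
        total += 1
        wins += max(roll[:n_attack]) > max(roll[n_attack:])
    return Fraction(wins, total)

print(p_attacker_wins(2, 1))  # 125/216
print(p_attacker_wins(1, 2))  # 55/216
print(p_attacker_wins(3, 1))  # 95/144 (i.e. 855/1296)
```

Notice the swing: two attacking dice against one defender flips the odds in the attacker's favor, while one die against two defenders is dreadful.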
There remain the cases 3-2 and 3-1. If you figured out 2-1, then 3-1 should be easy enough, since only the first factor changes. 3-2 is similar to 2-2, and all that should change is figuring P3(S=n) and P3(I=m|S=n). Don't worry about this notation, either. If you work out the problem, you will see what I am talking about, no doubt.
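Rather than working out P3(S=n) and P3(I=m|S=n) in closed form, one can dodge the conditional probabilities entirely with enumeration; here is a sketch (my own helper, not part of any library) that handles any split, including 3-2:

```python
from fractions import Fraction
from itertools import product

# General battle round: sort each side's dice descending, pair off the top
# min(n_attack, n_defend) dice, and count the attacker's losses (ties go to
# the defender). This sidesteps P3(S=n) and P3(I=m|S=n) entirely.
def battle(n_attack, n_defend):
    """Return {attacker losses -> probability} for one battle round."""
    k = min(n_attack, n_defend)  # number of paired comparisons
    counts = {}
    total = 0
    for roll in product(range(1, 7), repeat=n_attack + n_defend):
        total += 1
        a = sorted(roll[:n_attack], reverse=True)[:k]
        d = sorted(roll[n_attack:], reverse=True)[:k]
        losses = sum(ai <= di for ai, di in zip(a, d))
        counts[losses] = counts.get(losses, 0) + 1
    return {loss: Fraction(n, total) for loss, n in counts.items()}

for losses, p in sorted(battle(3, 2).items()):
    print(losses, p, float(p))
# 3-vs-2: attacker loses 0, 1, 2 armies with probability
# 2890/7776, 2611/7776, and 2275/7776 respectively
```

Reassuringly, battle(2, 2) reproduces the four-case tally from earlier, and battle(1, 1) gives the 1-vs-1 numbers; 3-vs-2 turns out to favor the attacker on the first comparison but costs armies often enough on the second to keep things interesting.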

I will probably get to talking about some graph theory thoughts I have had about the game, but they are currently even less organized than these on probability.
