Saturday, August 22, 2009

Risky Business

I am going to talk more about Risk, so stop reading now if that bores you.

In the last post, I talked about coming up with some specific general principles for playing the game, and I probably have some 'splainin' to do about that. So, here goes.

Risk is not a solvable game. What I mean by a solvable game is one for which there exists an algorithm that guarantees winning (or at least not losing) every time. Tic-tac-toe, for example, is solvable in the sense that there is an algorithm that would look something like this:

put X in center square

if O in ... square, put X in ... square

else ...

and so on and so forth. I am sort of appropriating and abbreviating some CompSci terminology here. It should be pretty clear, then, that for a solvable game, one should be able to write a computer program (which is an algorithm of sorts) to win at the game.
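To make this concrete, here is a minimal sketch (my own illustration, not any standard reference implementation) of a minimax search that plays tic-tac-toe perfectly, which is one way of exhibiting such an algorithm:

    # A minimal minimax sketch showing tic-tac-toe is solvable:
    # exhaustively searching the game tree yields a strategy that
    # never loses. (Illustrative only; not optimized.)

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        # Return (value, move) from `player`'s view: +1 win, 0 draw, -1 loss.
        w = winner(board)
        if w is not None:
            return (1 if w == player else -1), None
        moves = [i for i, s in enumerate(board) if s == ' ']
        if not moves:
            return 0, None          # board full: draw
        other = 'O' if player == 'X' else 'X'
        best = (-2, None)
        for m in moves:
            board[m] = player
            value, _ = minimax(board, other)
            board[m] = ' '
            if -value > best[0]:    # the opponent's loss is our gain
                best = (-value, m)
        return best

    value, move = minimax([' '] * 9, 'X')
    print(value, move)   # 0 (a draw with perfect play) and a best first move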

Not all games are solvable. Games of pure chance, for one, cannot be solved: there's nothing you can do in a fair game of craps to win every time, because the outcome depends on the dice. But not even all games of skill are solvable. A game such as poker is unsolvable because it depends on determining what an opponent will do in a given situation, which can't really be sussed out by an algorithm. I'm handwaving my proofs here, but these are just introductory comments anyway.

Risk is actually somewhat similar to poker in that a large part of it is determined by what an opponent does. It is actually even further from solvable (a technically meaningless comparison, but one that makes intuitive sense). Both games involve an element of chance, but in poker, even if chance is against you to some practically impossible degree, such as always dealing you the lowest possible hand, you can at least bluff your way out of it (assuming your opponent is unaware of your reality-bending bad luck). In Risk, if you lose all your dice rolls, you just lose, no matter how wisely you chose your battles in the first place.

That said, an experienced Risk player (or strategy game player in general) will know that certain strategies are more likely than others to yield positive returns. A simple example is to build up the territories that border opponents' territories, rather than ones that border only one's own. The reason for this should be obvious, so I don't plan on investigating this principle too thoroughly, though one certainly could.

I thought I would just list some general principles and discuss how I am thinking about addressing them.

1. Be conservative in attacking.

In working with the probabilities of dice rolling, I am already taking the first step in mathematically grounding this vague principle. Just determining the probabilities of the various outcomes of the various dice matchups shows the rather obvious point that it is better to attack with more dice rather than fewer, and better when your opponent must defend with fewer dice rather than more. The next step in this analysis is to extend from a single dice roll to an entire territory versus another, fighting until one side loses all its armies. This really just means applying the already known dice roll probabilities over and over again, which would hopefully lead to a convenient probability distribution, and thus an expected value, for the end state; but this is really a job for a computer. Already with just the single rolls, computation by hand is a pain for more than the fewest of dice, so iterating will be a huge headache.
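Here is a rough sketch of that computation, assuming the standard dice rules (attacker rolls up to three dice, defender up to two, dice compared pairwise from highest to lowest, ties go to the defender); the function names are my own:

    from itertools import product
    from functools import lru_cache

    def roll_outcome_probs(a_dice, d_dice):
        # Probability of each (attacker losses, defender losses) for a single
        # roll, found by brute force over all 6^(a_dice + d_dice) outcomes.
        counts, total = {}, 6 ** (a_dice + d_dice)
        pairs = min(a_dice, d_dice)
        for roll in product(range(1, 7), repeat=a_dice + d_dice):
            att = sorted(roll[:a_dice], reverse=True)
            dfn = sorted(roll[a_dice:], reverse=True)
            a_loss = sum(1 for x, y in zip(att, dfn) if x <= y)
            key = (a_loss, pairs - a_loss)
            counts[key] = counts.get(key, 0) + 1
        return {k: v / total for k, v in counts.items()}

    @lru_cache(maxsize=None)
    def attacker_wins(attackers, defenders):
        # Probability that `attackers` armies free to attack (not counting
        # the one army that must stay home) eliminate all `defenders`,
        # pressing the attack until one side runs out.
        if defenders == 0:
            return 1.0
        if attackers == 0:
            return 0.0
        probs = roll_outcome_probs(min(3, attackers), min(2, defenders))
        return sum(p * attacker_wins(attackers - al, defenders - dl)
                   for (al, dl), p in probs.items())

    print(attacker_wins(3, 2))   # about 0.66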

There is another aspect to being conservative in attacking that is related but much harder to analyze: how long a campaign should be. By campaign, I mean a series of attacks, usually in a chain. Even without working out the probabilities, it is clear that one's chances of success (retaining one's armies) decrease the more times those armies attack, and a campaign stretches them further because at least one army must be left behind in each newly taken territory. That alone is not a full order of difficulty harder than the previous problems, but once we consider defending the territories after the turn ends, we are really looking at a different kind of problem, one that involves graph theory. I'll get back to this later.
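As a stopgap before the graph theory, a crude Monte Carlo sketch (mine, and simplified: exactly one army garrisons each conquered territory) can at least show how the success probability decays as a campaign gets longer:

    import random

    def battle(attackers, defenders):
        # Fight until one side runs out; returns surviving attackers
        # (0 means the attack failed). `attackers` counts armies free
        # to attack; standard dice rules as above.
        while attackers > 0 and defenders > 0:
            att = sorted((random.randint(1, 6) for _ in range(min(3, attackers))), reverse=True)
            dfn = sorted((random.randint(1, 6) for _ in range(min(2, defenders))), reverse=True)
            for x, y in zip(att, dfn):
                if x > y:
                    defenders -= 1
                else:
                    attackers -= 1
        return attackers

    def campaign_success(start_armies, chain, trials=20_000):
        # Estimate the probability of conquering every territory in `chain`
        # (a list of defender counts), garrisoning one army at each stop.
        wins = 0
        for _ in range(trials):
            free = start_armies - 1      # one army must hold the start
            for defenders in chain:
                free = battle(free, defenders)
                if free == 0:
                    break
                free -= 1                # leave one army behind
            else:
                wins += 1
        return wins / trials

    # The same ten armies, stretched over ever longer chains of
    # 2-army territories, succeed less and less often:
    for length in (1, 2, 3, 4):
        print(length, campaign_success(10, [2] * length))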

2. Take at least one territory per turn.

This may not be so obvious to people who haven't played in a while, because they may have forgotten about sets. When a player takes at least one territory in a turn, that player earns the right to draw one card at the end of the turn (only one, regardless of how many territories were taken). Each card bears one of three symbols, and there are also two wild cards. A set of either three of a kind or one of each yields armies when turned in at the beginning of a turn, with the number of armies increasing with each set turned in (by anyone, not just said player). The increase is quite rapid, meaning that sets (and thus cards) become valuable quickly. For instance, the third set is worth 8 armies, more than the entire continent of Asia yields per turn, though, of course, each set is a one-time deal.
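For reference, here is a tiny helper encoding what I believe is the classic escalation schedule (4, 6, 8, 10, 12, 15, then five more per set; some editions use different numbers):

    def set_value(n):
        # Armies for the n-th set turned in game-wide, classic rules
        # assumed: 4, 6, 8, 10, 12, 15, then +5 per set thereafter.
        schedule = [4, 6, 8, 10, 12, 15]
        return schedule[n - 1] if n <= 6 else 15 + 5 * (n - 6)

    print([set_value(n) for n in range(1, 9)])   # [4, 6, 8, 10, 12, 15, 20, 25]
    # The third set (8 armies) already beats holding all of Asia (7 per turn).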

This principle seems fairly straightforward, but we can, of course, analyze it a bit with some probability. It would not be too difficult to determine the value of drawing a card, in armies, by determining the likelihood of completing a set with each card. A computational annoyance (rather than difficulty) arises from the total number of cards: 44, that is, 42 territory cards plus 2 wild cards. Also, since other players are drawing cards we cannot see, we can only guess at the distribution of cards left in the stack. Since our probability estimate is forced to be imprecise anyway, we might as well ignore wild cards and assume each symbol appears with probability 1/3. On the plus side, this simplifies the computations quite a bit.
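As a sanity check on that simplification, and assuming the classic deck composition of 14 cards per symbol plus the 2 wilds, the true single-draw probability brackets 1/3:

    # How rough is the 1/3 simplification? With a fresh deck, the chance
    # that a drawn card shows a given symbol sits near 1/3 either way:
    SYMBOLS_EACH, WILDS = 14, 2
    deck = 3 * SYMBOLS_EACH + WILDS                # 44 cards

    print(SYMBOLS_EACH / deck)                     # 0.318 (wilds never help)
    print((SYMBOLS_EACH + WILDS) / deck)           # 0.364 (wilds always count)
    # 1/3 = 0.333 falls between the two, and the gap matters less once we
    # admit we can't see which cards our opponents hold anyway.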

One interesting thing to note is that the value of a card changes, not only with the number of sets already turned in (once our models get better, we might actually want to estimate how many will be turned in by the time we complete our set), but also with the number of cards we currently hold. First, suppose that nobody else turns in any sets, so that the number of armies our set is worth is a constant, A. If we have two cards, they are either the same or different. In either case, only one of the three symbols will complete our set, so the probability of completing a set is the probability of drawing that symbol (assumed 1/3 above for simplicity; more on this later), and the value of drawing a card is A/3 armies. If instead we have three cards and no set, we must have two of one kind and one of another (left to the reader to verify). Then either the symbol we lack or the symbol we already have two of will complete the set, so the value of a card is now 2A/3. Along the same lines, if we have four cards and no set, the value of a card is A, as any card will complete a set.
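Since this is exactly the kind of claim a few lines of code can verify, here is a quick enumeration, under the same 1/3-per-symbol simplification (wilds ignored, symbols labeled I, C, A for the three kinds), of every set-less hand and the value of its next card:

    from itertools import combinations_with_replacement

    def has_set(hand):
        # Three of a kind, or one of each, among symbols I, C, A.
        return (any(hand.count(s) >= 3 for s in 'ICA')
                or all(s in hand for s in 'ICA'))

    # For every set-less hand, the expected value of the next card in
    # units of A, assuming each symbol is drawn with probability 1/3:
    for size in (2, 3, 4):
        for hand in combinations_with_replacement('ICA', size):
            if has_set(hand):
                continue
            value = sum(has_set(hand + (s,)) for s in 'ICA') / 3
            print(''.join(hand), round(value, 3))
    # Prints 1/3 for every 2-card hand, 2/3 for every set-less 3-card
    # hand, and 1 for every set-less 4-card hand, as claimed; the listed
    # 3-card hands also confirm they are all two of a kind plus one.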

Since this post is already long, I will get to some more points in a later post. Remind me about continents, specifically the Australia-first theory, border minimization, and army inflation if you are interested and I forget to post. Also remind me to continue numbering from 3.
