Wednesday, August 26, 2009

Updated

I was doing Risk calculations and got a probability greater than 1, so something was wrong but I don't feel like messing with it now. Also a pretty poor lecture episode of SVU. On the bright side, that country song is almost done.

Update: I got the prob. for 2-2 dice rolls. At least I think so. I think the problem was with the summations I was using, as it involved a double summation, which is notoriously easy to mess up, so I did the summation more or less by hand and got some results that seemed reasonable to me. I could theoretically check my work experimentally, but since the probabilities I found unsurprisingly have denominators of 6^4 (there are four dice involved, after all), checking would be rather tedious.

The real problem is moving on to 3-2. The method can be generalized to any number of dice, which should come as no surprise, but it would be much easier if I could get the summations to work out. Generalization isn't even very hard once you have the mechanism for "choosing" the higher dice, especially if you stick to only the top two dice mattering. Actually, in trying to work out how to do it, I was thinking about dice with fewer sides so that I could check my work along the way. I got to thinking what a two-sided die would look like, and then I realized it is a coin.

So, some notions about that. When I say that the probabilities I found seemed reasonable, I mean that they met the intuitive (and experimental, from playing) notion that the defense has an edge (due to ties going to the defender), and that none of the probabilities is especially high or low (nothing like 90%); losing two turns out to be the single most likely of the three outcomes, with winning one and losing one next.

If we look at our theoretical two-sided dice, we see that each person, defender and attacker, can throw one of four equally likely possibilities: 1,1; 1,2; 2,1; or 2,2. Let's take those to be the defender's dice and look at the likelihood, for each, of the attacker losing two [this is my method].

1,1. If the attacker throws 1,1, he loses two by tying both dice; any other throw yields at least one win.

1,2. If the attacker throws 1,1; 1,2; or 2,1, he loses two, since we pair the highest dice and ties go to the defender.

2,1. This is the same as the previous case, 3/4 cases mean a loss of 2.

2,2. In all cases, the attacker loses two.

As I mentioned, each of the defender's throws is equally likely, so we can say the probability of losing two is

P(lose 2) = 1/4 (1/4 + 3/4 + 3/4 + 1) = 11/16.

An astute reader might note that P(lose 2|(n,m)) = P(lose 2|(m,n)). One might also note that we can then simplify the process by combining the cases (n,m) and (m,n): assume n is greater than or equal to m and double that case's probability of occurring. This only works when n != m since, for example, there are two ways of throwing (5,3), that is, a five and then a three or a three and then a five, but only one way of throwing a (4,4). Those probabilities work out to 2/N^2 and 1/N^2 respectively, where N is the number of sides on the die.

Anyway, what I wanted to talk about was that in the case of N = 2, we see the defense has a huge advantage. You can work out that P(win 2) = 1/16, so P(win 1, lose 1) = 1 - 11/16 - 1/16 = 1/4. The probabilities aren't so skewed if you use standard six-sided dice. It would seem that increasing N lessens the defender's advantage, which makes sense if you think about it, because the advantage only comes into play given a tie. If people are throwing 20-sided dice, for example, the chance of a tie is pretty low, so the advantage is basically negated. We wouldn't expect the overall shape to change much, though: even as we stretch out the number of possible rolls, the chance that both of one player's dice beat both of the other's stays limited, so the mixed win 1, lose 1 outcome keeps a substantial share. You can think of it like an area problem, I suppose. It's kind of hard to explain the mental picture here.
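For anyone who wants the experimental check without the tedium, here is a quick Python sketch (the function name and output format are my own) that brute-forces the 2-2 case for hypothetical N-sided dice, pairing highest with highest and giving ties to the defender:

```python
from fractions import Fraction
from itertools import product

def two_v_two(N):
    """Outcome probabilities for 2 attack dice vs. 2 defense dice,
    each die having N sides; ties go to the defender."""
    counts = {'win 2': 0, 'win 1, lose 1': 0, 'lose 2': 0}
    for a1, a2, d1, d2 in product(range(1, N + 1), repeat=4):
        att = sorted((a1, a2), reverse=True)
        dfn = sorted((d1, d2), reverse=True)
        # compare highest vs. highest, then lowest vs. lowest
        wins = sum(1 for a, d in zip(att, dfn) if a > d)
        counts[('lose 2', 'win 1, lose 1', 'win 2')[wins]] += 1
    return {k: Fraction(v, N ** 4) for k, v in counts.items()}

print(two_v_two(2))   # the coin case worked out above: 1/16, 1/4, 11/16
print(two_v_two(6))   # standard dice, denominators of 6^4
print(two_v_two(20))  # the tie advantage mostly gone
```

Running it for N = 2 reproduces the 1/16, 1/4, and 11/16 worked out above, and increasing N does indeed shrink the defender's share.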

Tuesday, August 25, 2009

A Small Bobby D Post

Here's a new article about Mr. Dylan that has a short audio clip of him talking. Apparently he might be doing the voice of a GPS device for some car company. Or, maybe he's just messing around. Who knows?

Saturday, August 22, 2009

Risky Business

I am going to talk more about Risk, so stop reading now if that bores you.

In the last post, I talked about coming up with some specific general principles for playing the game, and I probably have some 'splainin' to do about that. So, here goes.

Risk is not a solvable game. What I mean by a solvable game is a game for which there exists an algorithm that guarantees winning (or at least not losing) in any case. Tic-tac-toe, for example, is solvable in the sense that there is an algorithm that would look something like this

put X in center square

if O in ... square, put X in ... square

else ...

and so on and so forth. I am sort of appropriating and abbreviating some CompSci terminology here. It should be pretty clear, then, that for a solvable game, one should be able to write a computer program (which is an algorithm of sorts) to win at the game.
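To make that concrete, here is a sketch of such a program for tic-tac-toe (the board encoding is mine): a brute-force game-tree search that returns the value of a position to the player about to move, +1 for a forced win, 0 for a draw, -1 for a forced loss.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for `player`, who is about to move."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1  # only the previous mover can have won
    if ' ' not in board:
        return 0  # full board, no line: a draw
    other = 'O' if player == 'X' else 'X'
    return max(-value(board[:i] + (player,) + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ' ')

print(value((' ',) * 9, 'X'))  # 0: perfect play from the empty board is a draw
```

A search like this is exactly the "if O in ... square, put X in ... square" table above, just generated on the fly instead of written out.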

Not all games are solvable. Games of chance, for example, are by definition not solvable, because the outcome depends on chance: there's nothing you can do in a fair game of craps to win every time; you depend on the dice. But not even all games of skill are solvable. A game such as poker is unsolvable because it depends on determining what an opponent will do in a given situation, which can't really be sussed out by an algorithm. I'm handwaving my proofs here, but these are just introductory comments anyway.

Risk is actually somewhat similar to poker in the respect that a large part of it is determined by what an opponent does. It is actually even further from solvable (note that this term is technically useless, but makes sense intuitively to an extent) because while both games involve an element of chance, in poker, even if chance is against you to some practically impossible degree, such as always dealing you the lowest possible hand, you can at least bluff your way out of it (assuming your opponent is unaware of your reality-bending bad luck). In Risk, if you lose all your dice rolls, you just lose, no matter how wisely you chose them in the first place.

That said, an experienced Risk player (or strategy game player in general) will know that there are certain strategies that are more likely than others to yield positive returns. A simple example is to build up the territories that border opponents' territories, rather than ones that border one's own territories exclusively. The reason for this should be obvious, so I don't plan on investigating this principle too thoroughly, though one certainly could.

I thought I would just list some general principles and discuss how I am thinking about addressing them.

1. Be conservative in attacking.

In working with the probabilities of dice rolling, I am already taking the first step in mathematically grounding this vague principle. By just determining the probabilities for the various outcomes of various dice matchups, we can see the rather obvious idea that it is better to attack with more dice rather than fewer, and to attack when your opponent can use fewer rather than more. The next step in this analysis is to extend from a single dice roll to an analysis of an entire territory versus another, until one side loses all its armies. This really just means applying the already known dice roll probabilities over and over again, which would hopefully lead to a convenient probability distribution, and thus an expected value for the end state, but this is really a job for a computer. Already with just the dice rolls, computation is a pain for more than the fewest of dice, so iteration is going to be a huge headache.
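As a sketch of what that computer job would look like (rule details as I remember them: the attacker rolls up to three dice but must leave one army behind, the defender rolls up to two, and both always roll the maximum; the function names are mine):

```python
from fractions import Fraction
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def roll_outcomes(n_att, n_def):
    """Distribution of (attacker losses, defender losses) for one roll."""
    counts = {}
    for att in product(range(1, 7), repeat=n_att):
        for dfn in product(range(1, 7), repeat=n_def):
            pairs = zip(sorted(att, reverse=True), sorted(dfn, reverse=True))
            d_loss = sum(1 for a, d in pairs if a > d)  # ties to the defender
            a_loss = min(n_att, n_def) - d_loss
            counts[(a_loss, d_loss)] = counts.get((a_loss, d_loss), 0) + 1
    total = 6 ** (n_att + n_def)
    return {k: Fraction(v, total) for k, v in counts.items()}

@lru_cache(maxsize=None)
def p_conquer(attackers, defenders):
    """P(the attacker eventually takes the territory)."""
    if defenders == 0:
        return Fraction(1)
    if attackers <= 1:
        return Fraction(0)  # one army must stay home; no dice left to roll
    n_att = min(3, attackers - 1)
    n_def = min(2, defenders)
    return sum(p * p_conquer(attackers - al, defenders - dl)
               for (al, dl), p in roll_outcomes(n_att, n_def).items())

print(float(p_conquer(10, 8)))
```

The memoization is what makes the repeated iteration bearable, and exact fractions keep the 6^4-style denominators from dissolving into rounding noise.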

There is another aspect to being conservative in attacking that is related but much harder to analyze: how long a campaign should be. By campaign, I mean a series of attacks, usually in a chain. It is clear without working out the probabilities that one's chances of success (retaining one's armies) decrease the more times one uses those armies to attack, and a campaign stretches them further because at least one army must be left behind in each newly taken territory. That alone is not a full order of difficulty harder than the previous problems, but when one considers defending territories after the turn, we find that we are really considering a different kind of problem, one that involves graph theory. I'll get back to this later.

2. Take at least one territory per turn.

This may not be so obvious to people who haven't played in a while, because they may have forgotten about sets. When a player takes at least one territory in a turn, that player earns the right to draw one card at the end of the turn (only one card, regardless of how many territories are taken). The cards have one of three symbols on them, and there are two wild cards. A set of either three of a kind or one of each yields armies when turned in at the beginning of a turn, with the number of armies yielded increasing with each set turned in (by anyone, not just said player). The increase in armies is quite rapid, meaning that sets (and thus cards) become valuable quickly. For instance, the third set is worth 8 armies, which is more than the entire continent of Asia is worth, though, of course, each set is a one-time deal.

This seems a fairly straightforward proposal, but we could, of course, analyze it a bit with some probability. It would not be too difficult to determine the value, in armies, of getting a card by determining the likelihood of completing a set with each card. A computational annoyance, rather than a real difficulty, arises in that the total number of cards is 44, that is, 42 territory cards plus 2 wild cards. Also, since other players are also drawing cards, which we cannot see, we can only guess at the distribution of cards left in the stack. Since this forces our probability to be imprecise anyway, we might as well ignore wild cards and presume the likelihood of any given symbol is 1/3. On the plus side, we've now simplified the computations quite a bit.

One interesting thing to note is that the value of a card will change, not only based on the number of sets already turned in (we might actually want to suppose how many will be turned in by the time we complete the set, once our models are getting better), but also based on the number of cards we currently own. See, first let's suppose that nobody else is turning in any sets; that is, the number of armies our set is worth is constant, A. If we have two cards, either they are the same or they are different. In either case, only one of the three symbols will complete our set, so the probability of getting a set is the probability of getting that card (assumed 1/3 before for simplicity, but more on this later). Then the value of getting a card is A/3 armies. However, if we have three cards, we must have two of one kind and one of another (left to the reader to verify). Then either the card that we don't have or the card that we already have two of will complete the set, so the value of a card is now 2A/3. Along the same line, if we have four cards and no set, the value of a card is A, as any card will complete the set.
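The hand-size effect is easy to check under that 1/3-per-symbol simplification. A sketch (the symbol names are made up; only the counts matter):

```python
from fractions import Fraction

SYMBOLS = ('infantry', 'cavalry', 'artillery')  # invented labels

def p_complete(hand):
    """P(one more card completes a set), assuming each symbol is
    equally likely and ignoring wild cards, as above."""
    good = 0
    for s in SYMBOLS:
        cards = list(hand) + [s]
        counts = [cards.count(sym) for sym in SYMBOLS]
        # a set is three of a kind or one of each
        if max(counts) >= 3 or min(counts) >= 1:
            good += 1
    return Fraction(good, 3)

print(p_complete(['infantry', 'cavalry']))                         # 1/3
print(p_complete(['infantry', 'infantry', 'cavalry']))             # 2/3
print(p_complete(['infantry', 'infantry', 'cavalry', 'cavalry']))  # 1
```

Multiplying each result by the constant set value A gives the A/3, 2A/3, A progression described above.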

Since this post is already long, I will get to some more points in a later post. Remind me about continents, specifically the Australia-first theory, border minimization, and army inflation if you are interested and I forget to post. Also remind me to continue numbering from 3.

Friday, August 21, 2009

Anybody Want to Play Some Risk?

So, I like the board game Risk but nobody else seems to, or well, nobody who is around at the moment. This means I have to play it on my own if I want to play it at all, which is actually ok because while I am pretty good at it, I have been wondering if I could come up with some specific general principles for playing the game. It is kind of hard to explain what I mean by "specific general principles," but what it amounts to is finding a good way of modeling the game mathematically. If you are not familiar with the game, I would suggest looking it up on Wikipedia or something because I don't feel like explaining it and probably you aren't interested, anyway.

It is intriguing mathematically because it seems as though it can be modeled partly through graph theory and partly through probability. Since battles are decided in the game by dice rolls (not just a simple 1-1 roll; it is a bit more complex, if you are not familiar), and dice rolls are in general pretty easy to figure probabilities on, that seems like a reasonable starting point. If you have played the game, you know that it is unwise to attack territories when you have few armies, not just because you need to conserve armies in general, but also because the number of dice you can roll depends on the number of armies you are using to attack or defend. So, one general principle Risk players would be familiar with is deciding whether or not to attack based on the number of dice available (it is complicated by the move-at-least-as-many-armies-as-dice rule, but that is a minor consideration, and the number of dice one should use is not determined only by the probability; I am merely figuring part of the principle here). In finding the probability, I can determine more exactly how many dice one should use, rather than just a general principle.

As I said, probability with dice is generally not too hard, but there are a couple things to keep in mind here.

1. Ties go to defenders. This point is pretty easily compensated for when computing probabilities, as it just means changing a greater than to a greater than or equal to.

2. When someone uses more than one die, the attacker's and defender's respective highest dice are compared, then the next highest. This complicates computation considerably. I will try to explain. I will notate Px(y) to mean the probability of rolling a value of y with x dice. Note that this is probably not good notation: what does it mean to roll y with two dice? One y? At least one y? All y? A quick and natural fix is to let y be of the form (y1, y2, y3) for x = 3, (y1, y2) for x = 2, etc. Anyway, that will suffice for the moment.

With one die versus one die, we can figure that

P(attacker wins) = [SUMn=1|6](P1(n)*P1(&lt;n)) = 5/12, writing P1(&lt;n) for the probability of a single die coming up less than n. Likewise, the defender rolls strictly higher with probability 5/12, and a tie happens with probability 1/6; that is 5/12+5/12+1/6=1. Since ties go to the defender, the defender wins 7/12 of the time. If you didn't follow all that notation, don't worry. The point is that this calculation is easy and requires at most a linear sum, which is easy enough that Gauss did it at eight years old. I know I have mentioned that before.
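If the notation is off-putting, brute force says the same thing; a quick check in Python:

```python
from fractions import Fraction
from itertools import product

# All 36 ordered (attacker, defender) outcomes for one die each.
rolls = list(product(range(1, 7), repeat=2))
p_att = Fraction(sum(1 for a, d in rolls if a > d), 36)   # 5/12
p_tie = Fraction(sum(1 for a, d in rolls if a == d), 36)  # 1/6
print(p_att, p_tie, 1 - p_att - p_tie)  # 5/12 1/6 5/12
# ties go to the defender, so the defender wins 5/12 + 1/6 = 7/12 overall
```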

Now, let's consider a more difficult problem. What are my probabilities for winning with two dice versus two dice? If we would like to follow our previous solution's path, then we could ask what P2(n) and P2(&lt;n) are, but the pairing of highest dice with highest dice makes it cleaner to split into cases:

SA&gt;SD, IA&gt;ID -&gt; win 2
SA&gt;SD, IA&lt;=ID -&gt; win 1, lose 1
SA&lt;=SD, IA&gt;ID -&gt; win 1, lose 1
SA&lt;=SD, IA&lt;=ID -&gt; lose 2

What I mean here is S = the higher of the two dice (S from superior) and I = the lower of the two dice (I from inferior). The case I = S is allowed, actually. A and D stand for attacker and defender, respectively. "Win" and "lose" are from the attacker's point of view, arbitrarily.

The advantage of approaching the problem from this direction seems to be that finding P2(S=n) is actually very simple. It turns out to be (2n-1)/36. To get this result, I just drew a grid of coordinate pairs and found which ones had n as the higher die. If you do this, you will notice you get an L-shape (possibly rotated depending on where you start numbering) of length n. That means there are 2(n-1)+1 = 2n-1 squares in the grid with n as the higher of the pair, and clearly 36 total pairs. From there we can figure probabilities for SA>SD and SD>SA, give or take some equal signs, but hopefully you can fill in the blanks. Of course, if we want to know P(SA>SD) in general, we need to use a sum with n ranging from 1 to 6 as before. That is still only half the battle, since we have to figure the I's into our probability; we are at an intermediate step. It's not entirely useless, one might note, to know P(SA>SD) on its own: it at least provides an upper bound for P(win 2) and a lower bound for P(win at least 1). Anyway, I haven't worked out much more, but that is merely because I don't like computing things too much.
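The L-shape count, and the upper bound it gives, can be checked by the same kind of enumeration (Python; p_high is my name for P2(S=n)):

```python
from fractions import Fraction
from itertools import product

def p_high(n):
    """P2(S = n): the higher of two dice is exactly n."""
    count = sum(1 for pair in product(range(1, 7), repeat=2) if max(pair) == n)
    return Fraction(count, 36)

# the L-shape count from above: 2n - 1 cells out of 36
for n in range(1, 7):
    assert p_high(n) == Fraction(2 * n - 1, 36)

# P(SD < n) = ((n-1)/6)^2, so summing over the attacker's high die:
p_sa_beats_sd = sum(p_high(n) * Fraction((n - 1) ** 2, 36) for n in range(1, 7))
print(p_sa_beats_sd)  # 505/1296, a bit under 0.39
```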

You might notice I skipped the 1-2 and 2-1 cases here. They are actually fairly simple, and you can derive them relatively quickly since you don't really have to worry about pairing dice; you just figure sums of terms like P1(n)P2(&lt;n), where P2(&lt;n) here means the higher of the two dice is less than n.
There remain the cases 3-2 and 3-1. If you figured out 2-1, then 3-1 should be easy enough, since only the first factor changes. 3-2 is similar to 2-2, and all that should change is figuring P3(S=n) and P3(I=m|S=n). Don't worry about this notation, either; if you work out the problem, you will see what I am talking about, no doubt.
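Since the hand computation is the only real obstacle, here is the lazy way out: enumerate every matchup, including the troublesome 3-2, directly (Python; the (attacker losses, defender losses) keys are my own convention):

```python
from fractions import Fraction
from itertools import product

def matchup(n_att, n_def):
    """Single-roll loss distribution: pair the highest dice first,
    ties to the defender."""
    counts = {}
    for att in product(range(1, 7), repeat=n_att):
        for dfn in product(range(1, 7), repeat=n_def):
            pairs = zip(sorted(att, reverse=True), sorted(dfn, reverse=True))
            d_loss = sum(1 for a, d in pairs if a > d)
            a_loss = min(n_att, n_def) - d_loss
            counts[(a_loss, d_loss)] = counts.get((a_loss, d_loss), 0) + 1
    total = 6 ** (n_att + n_def)
    return {k: Fraction(v, total) for k, v in counts.items()}

for n_att in (1, 2, 3):
    for n_def in (1, 2):
        print(f"{n_att} vs {n_def}:", matchup(n_att, n_def))
```

For 3-2 this gives the attacker winning both about 37% of the time, splitting about 34%, and losing both about 29%; with full dice, the attacker finally comes out ahead.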

I will probably get to talking about some graph theory thoughts I have had about the game, but they are currently even less organized than these on probability.

Thursday, August 20, 2009

More Law & Order Commentary

Watched awesome episode of Law & Order today. Jeffrey Tambor was an incompetent judge presiding over the case of a senator, played by somebody I should recognize but forgot. The defending lawyer was the guy who now plays scruffy cop on the current season. It was like a wormhole in the L & O timeline.

I have been thinking about it, and I think there are a few reasons I actually like Law & Order and can't stand other dramas.

1) Each episode manages to tell a whole story, so I don't have to wait forever for the conclusion of plotlines I don't care about.

2) The characters are not universally irritating. The motivations for the recurring cast are almost irrelevant because they are just doing their jobs, and the motivations for the suspects, etc., just make sense.

That I guess leads to

3) The writing is just better. Usually the second half of each episode centers on some sort of interesting (some readers might say "gimmicky") legal argument, so it's not wholly dependent on "what happens to so-and-so" type stories, which depend on you liking, or at least caring about, the character. The first half of the show is generally just pretty solid mystery-ing.

To expand on my points in a rambling and unstructured way, I'd like to mention that we do get to see more details about the main characters (that is, DAs and detectives) rolled out over the course of many episodes, but they're not generally essential to the plot, and I think the characters are actually more endearing because we are seeing them work and trying to figure out stuff along with them rather than having their stories shoved at us. Maybe it is a Japanese way of thinking, but I feel a greater connection with the ADAs, whose lives we see very little of outside the office, than with people on other TV dramas who spend their time talking about their messed-up childhoods or lost loves and the like.

Law & Order is clearly the best brand of the three that are still on (it is also better than Trial by Jury, I think, but I didn't see much of its lone season), and I think the reasons I talked about before show why. SVU focuses far too much on each of the detectives' overwrought backstories. For example, the episode I watched last night was just a story about Eliot's daughter and his mother, who both have some sort of mental illness. It was the kind of story that would be tragic if it happened to someone in real life, but as it was, it was just kind of a boring hour of poorly lit emoting. That is another strike against SVU, which is sort of unrelated to my previous points: it's way too dark, not in subject matter, but in the sense that it looks like the whole show is shot using only a flashlight for lighting. This helps obscure Mariska Hargitay's face, though, so that is a plus (she is ugly and looks like a dude).

Criminal Intent is really a different kind of show. It's pretty much like Sherlock Holmes if Holmes were living in present-day Manhattan and also basically a mental patient. The real selling point of the show is watching Goren twist up his face and body, then own some suspect through psychology or pick out some bizarre clue. It's in no way realistic, but pretty great. Eames makes a great Watson, too. I've only seen one episode with Jeff Goldblum, but he seems pretty great thus far. The Chris Noth episodes were meh, but mostly due to none of his partners having any personality whatsoever. I don't know if that was mostly a writing thing or if they just look shabby compared to D'Onofrio, who is just plain awesome.

Wednesday, August 19, 2009

Happy Jack

I just watched an episode of Law & Order, original recipe, which ended with Jack McCoy meeting his daughter (?) for dinner, and then he smiled. That is the only time I think that has ever happened on that show. Amazing.

Saturday, August 15, 2009

"Live" blogging

For some reason it is funny to me to liveblog a rerun of a show. That show is SVU, which USA likes to show in huge blocks, leading to me having seen pretty much all of them.

7:05 - They are looking for someone named Anika (sp?) using cell phones. Stabler got in a plug for some kind of 911 thing with cell phones. SVU loves putting public service announcements in their dialogue as clunkily as possible.

7:08 - Oh noes! She was pregnant! I didn't see the beginning of this episode, so I have no idea what's up.

7:09 - Benson brushes off somebody's question like always. Amazing police work. By police work, I mean being a jerk and making a big show of being offended by crimes.

7:10 - Circle camera!

7:11 - This may be the legendary episode where the Asian tech guy gets to flip out. I've seen it, but I forget which one is which. I just remember thinking it hilarious that the writers decided this minor character should get his own spotlight episode. Usually we get nonsense about Stabler's marriage or Benson not being able to find a relationship because of her job [actually it is because she looks like a dude]. The fact that this guy who basically just runs audio programs on a laptop gets so attached to a case is just hilarious.

7:12 - Really I am just annoyed with Miller and Bud commercials now because they keep trying to convince us that they are good beer. They should really just say, "You will buy this because it is fairly cheap but doesn't taste like Steel Reserve." I would respect them a lot more if they were honest about it being pretty bad but still alright if you don't really care about how it tastes. Beats the Beast any day.

7:16 - I really hate Ben Stein now, too. He keeps advertising for one of these "free" credit score sites that is no doubt a scam, though I haven't figured out how yet because I don't care. If you didn't know, he is a vocal creationist, which should tell you that he is a conman or an idiot or both, so I wouldn't suggest using whatever crap he is trying to shill.

7:20 - The budget at SVU must be really tight. They don't seem to be able to afford to turn on the lights EVER.

7:22 - Cragen is always under pressure from the brass. That must really ruin his otherwise cushy job of standing around in his office and telling the detectives that he "wants this guy."

7:25 - Looking inconspicuous in your long coats and black sunglasses, ENTIRE POLICE DEPARTMENT.

7:27 - So that guy is also dead. I'm thinking it is all an overly complicated conspiracy. Commercial break.

7:29 - How many times can they redesign this Nasonex bee without ever making him endearing or not creepy?

7:30 - Finn: "Prints on the dead guy came back!" Ha ha

7:37 - Sorry for the lack of updates. Firefox stopped cooperating for a few minutes. In the meantime, they've managed to arrest the guy who apparently had a guy kidnap his ex-girlfriend and then push a "kidnapper" into a moving car during a payoff. Paula Deen is telling me how easy it is to cook tenderloins in a bag.

7:40 - "I didn't kidnap Anika (?)" A likely story. They found BROCHURES in your HOUSE, dude. Lock him up.

7:42 - Benson: "Baby, baby, baby, baby..." I am paraphrasing. Stabler: "I am going to punch someone I am so angry."

7:44 - Stabler: "She's definitely going to put her money where her mouth is." A joke is too easy here because he is talking about a rather large woman.

7:47 - I'm tired of commercials bossing me around. No, TV, I won't "chill out with Coke products." Please phrase your shtick in the form of a question.

7:50 - Coming this fall, Stabler and Finn are Law & Order: Beach Patrol.

7:51 - I don't think I've seen a single episode of this show where Stabler has failed to mention that he has four kids. I can't help noticing he doesn't seem to own a single World's #1 Dad T-shirt, though. hmm...

7:52 - On TV they always make a big deal of how hard it is to deliver a baby, and then they end up doing it in about 30 seconds by the power of saying "push!"

7:55 - I take back my comments that Mariska Hargitay is a dude. I now believe she is some sort of alien sent to earth to "act with her eyes" by moving them back and forth as if there were a permanent fly in front of the camera.

7:58 - Another hour of police work wasted. I could have told you who did it at the beginning: stodgy, fat, rich lady.

Well, that was fun. Saturday, Saturday, Saturday night's alright!

Friday, August 14, 2009

Stripes

I am watching Stripes. It's a pretty good movie. I think the cherries in the kitchen are attracting little flies. Biteface is sleeping on the couch, getting his blanket all covered with hair. A fascinating day, indeed.

Thursday, August 13, 2009

TV post

Again I was going to liveblog a rerun of Law & Order, but TNT decided to show golf instead this afternoon, so I didn't get to. Then I forgot to do it for one from last season. Since Dan was posting about TV, though, I will, too.

Law & Order is great and pretty much always has been. Tonight's episode was lame because the motivation was really far fetched and there's no way that people would remember minor incidents from twenty years ago, but the plot hinged on them doing just that. Also, the guy who took Jack's place continues to be a shallow imitation of Jack. Also also it is sexist that his assistant DA wasn't promoted ahead of him. But, whatever, fat black cop and scraggly white cop make a good detective pair. I don't always remember the characters' names on that show, but if you watch it, you can probably tell who I mean.

Wednesday, August 12, 2009

First Post in a While

I was going to "live" blog a rerun of Law & Order today, but Blogger was not cooperating, so I couldn't. It is too bad, because this was an episode with Alana De La Garza, who is very cute even though she has kind of an alien face, and now you are all missing out on my insights. My heart belongs to this No! Drug girl, however. That poster was up in the BOE, and I looked at it a lot over my last week or so because I had nothing to do and she is mesmerizing.

Monday, August 3, 2009

Maze Problem

Have you ever read a Games Magazine? I guess "read" is not the proper word. It's a magazine of puzzles, like crossword puzzles and whatnot, so just reading through it wouldn't be very enjoyable. If you are a member of my family, and since you are reading this blog, the odds are pretty good you are, you have at least tried some of the puzzles. Today I am going to talk about a puzzle that I liked doing because it's a math-y thing and I'm pleased with myself for having solved it. If Will Shores or whoever wants to complain, then he can email me and I'll take this down. I sort of doubt that will happen.

Anyway, on the cover of this latest issue, there is a puzzle entitled "Lost in the Pyramid." It is a 7 x 7 grid of squares in different colors. The object is to get from the center square to one of the edges, with the path touching one and only one of each color square along the way. The last square, then, has to be one of the edge squares, but not one of the four corner squares. I have translated the colors into letters so as to make this possible to blog. Hopefully the formatting works out, but if not, you should at least be able to follow along by making your own representation. Here's what the puzzle looks like (the letters-color correspondence is obviously arbitrary):

ADDLLLT
AEJMTTT
AFJNOSU
BBJVQSU
CGKKQRU
CHIPQRR
CHIPPPR

Obviously not quite square when I type it like that (I think J's are too thin), but hopefully you get the idea. You start at the V, and have to draw a path touching one of each letter and end at an edge. I am going to refer to each square with a redundant nomenclature of X(y, z) where X is the letter and y and z the row and column number, starting from the top left [A(1,1)] and proceeding to the bottom right [R(7,7)].

I'll get to the solution, but since you might want to figure it out yourself, I'll include a picture break here.


Mmm, Stag. I'm not sure why this beer brewed in Milwaukee is the local beer of choice, but it seems to be.

My method for solving this was to turn it into a graph theory problem. That is, each square is a vertex, and adjacent squares are connected by edges. We proceed by removing edges and vertices until only one path remains. So, initially, the graph should look like a big grid [I would be a poor explainer of graph theory if I didn't note that you can draw the graph in any shape you'd like, as long as you preserve the relationships between vertices and edges], which is what it is. The obvious first move is to remove all edges between two letters of the same kind, since no path could contain two of the same letter. There are too many to list here specifically, and anyone should be able to do this step on their own. Another convenient preliminary step is to mark all squares which are the only ones with their corresponding letters. That may be confusing wording: if there is only one of a letter in the grid, mark it, as the path necessarily contains that square. That is E(2,2); F(3,2); M(2,4); N(3,4); O(3,5); G(5,2); V(4,4). I am using bold here to represent a vertex we know (or are showing) to be in the path. I will try to keep that notation going throughout.

Note that some of the squares we have marked as included in the path are adjacent. One might be tempted to suppose the path runs from one marked square to the next, but take heed that this is not necessarily true. Note that N(3,4) is adjacent to both M(2,4) and O(3,5). We can't distinguish which of these it will be, if it is even one of these at all. The path could even pass through J(3,3); we don't know if it is in the path or not. Now, onto the parts that require a bit more reasoning. I am going to label my steps so that I can refer to them like a real mathematician might.

1. If we look at A(1,1), we can see that there is only one edge for us to use [we already eliminated the A-A edge]. Then A(1,1) must be the final vertex on our path. However, it is a corner vertex, and, thus, cannot be final. So, we may eliminate A(1,1) and the corresponding edge. The same process can be used to remove T(1,7); C(7,1); and R(7,7), all our corner vertices.

2. Examine R(6,7). Only one edge remains, to U(5,7). From that vertex, only one other edge remains, to R(5,6). Then any path through R(6,7) contains two R's; we may eliminate R(6,7). I won't mention eliminating the edges from an eliminated vertex anymore, this should be obvious.

3. This is really a key step, so if you are paying attention, pay attention. U(4,7) and U(5,7) have only one edge each and are possible final vertices. U(3,7) has only two, one of which leads to T(2,7), which similarly can only be final. Thus, if a U is contained in the path, either that U is final, or T(2,7) is final. Since a U is necessarily contained in the path, we have determined that our final vertex is one of these four vertices [a U or T(2,7)]. Then, obviously, no other vertex can be final.

4. By applying (3), we may now remove any vertex along the edge of the grid (sorry for the edge/edge confusion) with only one edge. That is A(2,1); H(7,2); P(7,6); D(1,2); C(6,1); L(1,6).

5. Another application of (3) yields I(7,3); L(1,5); and P(7,5) out.

6. Another application yields P(7,4) out.

7. R(6,6) now has only one remaining edge but can't be final, as it is not along an edge, so we may eliminate it, as well.

8. We have removed vertices, so we may mark some others as in the path, as in our preliminary step: D(1,3); L(1,4); A(3,1); C(5,1); H(6,2); I(6,3); R(5,6); P(6,4).

9. Since A(3,1) is in the path and has only two edges, both must be used, so B(4,1).

10. Then B(4,2) is out.

11. E(2,2) has only two edges, so J(2,3).

12. Then J(3,3) and J(4,3) are out.

13. As in (6), Q(6,5) is out.

14. We know C(5,1)-G(5,2) is in our path. Assume G(5,2)-K(5,3). Then K(5,3)-I(6,3). If we continue to P(6,4), then the path contains no H (it is impossible to get to). If, instead, we suppose I(6,3)-H(6,2), we have essentially cut ourselves off and must end at H, which is impossible. Thus G-K is impossible, so we must have G(5,2)-H(6,2)-I(6,3)-P(6,4).

15. Then K(5,4).

16. Since each vertex in the obvious path E(2,2)-...-P(6,4) has only two edges, we can confirm our suspicions that J(2,3)-E(2,2)-...-P(6,4)-K(5,4) is part of our path (a sub-path, I suppose). Note, we don't yet know the order, that is, which end connects to the V, and which to the edge.

17. From J(2,3), we have two options, D(1,3) or M(2,4). If we assume the latter, we have cut off our only D and L, so we must have J(2,3)-D(1,3)-L(1,4)-M(2,4).

At this point, the solution is close enough that one could probably guess one's way to the end, but I have my reasons for wanting to continue in this fashion, despite the next two steps being quite involved in terms of logic trees. Regardless of where we look, we are going to have to start using more complicated logic trees to find contradictions, so let's just start at the beginning, that is V, and try each of the wrong first steps. By eliminating them, we leave ourselves with only the correct solution.
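What the next two steps do by hand is really a small backtracking search: extend the path one square at a time and abandon any branch that repeats a letter. A bare-bones sketch of that idea, on an illustrative 2x2 grid rather than the real puzzle:

```python
# Illustrative 2x2 grid; the real puzzle grid is not reproduced here.
grid = {(1, 1): "A", (1, 2): "B", (2, 1): "C", (2, 2): "D"}

def neighbors(v):
    r, c = v
    return [p for p in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)) if p in grid]

def search(start, need):
    """Yield every path from `start` that uses each letter in `need` exactly once."""
    def extend(path, used):
        if used == need:
            yield list(path)
            return
        for nxt in neighbors(path[-1]):
            if grid[nxt] not in used:          # a repeated letter kills the branch
                path.append(nxt)
                yield from extend(path, used | {grid[nxt]})
                path.pop()                     # backtrack and try the next option
    yield from extend([start], {grid[start]})

paths = list(search((1, 1), {"A", "B", "C", "D"}))
# Two ways around the square: clockwise and counterclockwise.
```

Steps 18 and 19 below are this same search done manually from V, with the dead branches argued away instead of enumerated.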

18. Assume V(4,4)-Q(4,5). Then either Q(4,5)-S(4,6) or Q(4,5)-O(3,5). The former yields two short paths (they leave out letters), both to U's, so we must have Q(4,5)-O(3,5). Passing to S(3,6) again leads to a short path, so that possibility is out. Passing to T(2,5) means leaving out N(3,4), so that is out. Our final possibility here is V(4,4)-Q(4,5)-O(3,5)-N(3,4)-M(2,4)-...-K(5,4)-Q(5,5), but this path contains two Q's and is thus out. With all possibilities eliminated, we conclude V-Q is out.

19. Assume V-K. Then we must have V(4,4)-K(5,4)-...-M(2,4). Passing to T(2,5) again leaves out N(3,4) as in (18), so that is out. Passing to N(3,4) leaves out T(2,5), so T(2,7) must be the final step in the path. Then working backwards we have O(3,5)-S(3,6)-U(3,7)-T(2,7). One more step back leads to either N(3,4) or Q(4,5). If we suppose N(3,4), then our path is set and leaves out R, so that possibility is out. Stepping back to Q(4,5) leads to another V-Q, which is out by our assumption V-K. Then our assumption [V-K] is out, so we must have V(4,4)-N(3,4).

That was an awful lot of work to show one edge between two vertices we already knew were in the path. It gets easier, though.

20. K(5,4) has only two edges, so K(5,4)-Q(5,5); then Q(4,5) is out.

21. Similarly, Q(5,5)-R(5,6).

22. Then either U(4,7) or U(5,7) must be final, so we may remove T(2,7) and U(3,7).

23. As in (6), we may remove S(3,6).

24. Then S(4,6) is the only S left, and R(5,6)-S(4,6)-U(4,7).

25. Then U(5,7) is out.

26. T(2,5).

27. We now know every vertex in the path, and all but a few edges. We just need to determine how to get from N(3,4) to M(2,4). As in (17), we can easily verify N-O-T-M is the only path with all the letters.

That's it, then, we have the full path:
V(4,4)
N(3,4)
O(3,5)
T(2,5)
M(2,4)
L(1,4)
D(1,3)
J(2,3)
E(2,2)
F(3,2)
A(3,1)
B(4,1)
C(5,1)
G(5,2)
H(6,2)
I(6,3)
P(6,4)
K(5,4)
Q(5,5)
R(5,6)
S(4,6)
U(4,7)
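The finished path can be checked mechanically: consecutive squares should be adjacent (I'm assuming orthogonal adjacency, which every step of the listed path bears out), and each of the 22 letters should appear exactly once.

```python
# The finished path from the solution above, as (letter, row, col) triples.
path = [
    ("V", 4, 4), ("N", 3, 4), ("O", 3, 5), ("T", 2, 5), ("M", 2, 4),
    ("L", 1, 4), ("D", 1, 3), ("J", 2, 3), ("E", 2, 2), ("F", 3, 2),
    ("A", 3, 1), ("B", 4, 1), ("C", 5, 1), ("G", 5, 2), ("H", 6, 2),
    ("I", 6, 3), ("P", 6, 4), ("K", 5, 4), ("Q", 5, 5), ("R", 5, 6),
    ("S", 4, 6), ("U", 4, 7),
]

# Every letter A-V appears exactly once.
letters = [letter for letter, _, _ in path]
assert len(set(letters)) == 22

# Consecutive squares are orthogonal neighbors (Manhattan distance 1).
for (_, r1, c1), (_, r2, c2) in zip(path, path[1:]):
    assert abs(r1 - r2) + abs(c1 - c2) == 1
```

Both checks pass, which is a nice sanity check on the 27 steps of hand reasoning.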

It took a bit, but it's worth it, right?