The game Connect Four is a pretty neat little game, which was solved back in 1988, independently, by two different individuals: James Allen and Victor Allis. It makes a pretty nifty benchmark too, called Fhourstones. My modern desktop evaluates over 10 million positions per second, and can solve the entire game in about 3 minutes. Pretty cool.

# Category: Games and Diversions

## Rules of Awari

Jonathan Schaeffer used to have a page that contained the rules for Awari that are typically used in computer play. Through the miracle of the Internet Wayback Machine, you can look them up here:

## Counting Awari Positions…

Previously I wrote a simple program to compute the number of positions that are legal in Checkers. I thought I might perform the same analysis for the game of Awari. The simplest approximation you can get is by counting the number of ways that n indistinguishable stones can be distributed among m distinguishable pits. A few minutes of scratching (or searching the web) will show you that this can be done in comb(n+m-1, m-1) ways. Awari begins with 48 stones and 12 pits, so if we sum these counts over every possible number of stones, we'll get the total number of positions.

Well, it is an overestimate. First of all, there are no legal positions in Awari with 47 stones, since the first capture removes either 2 or 3 stones. There are also many positions counted here which cannot occur in the game: for instance, the non-moving player can never have stones in every pit (except on the first move, of course), because he would previously have had to empty one pit to sow it, and that pit is always skipped when you sow. And these totals don't include any kind of "reachability" analysis: many positions simply can't be reached in play.

According to Romein and Bal's work, there are 889,063,398,406 positions which are reachable in Awari. Running my naive totaller gives the following counts:

```
 1                12
 2                78
 3               364
 4             1,365
 5             4,368
 6            12,376
 7            31,824
 8            75,582
 9           167,960
10           352,716
11           705,432
12         1,352,078
13         2,496,144
14         4,457,400
15         7,726,160
16        13,037,895
17        21,474,180
18        34,597,290
19        54,627,300
20        84,672,315
21       129,024,480
22       193,536,720
23       286,097,760
24       417,225,900
25       600,805,296
26       854,992,152
27     1,203,322,288
28     1,676,056,044
29     2,311,801,440
30     3,159,461,968
31     4,280,561,376
32     5,752,004,349
33     7,669,339,132
34    10,150,595,910
35    13,340,783,196
36    17,417,133,617
37    22,595,200,368
38    29,135,916,264
39    37,353,738,800
40    47,626,016,970
41    60,403,728,840
42    76,223,753,060
43    95,722,852,680
44   119,653,565,850
45   148,902,215,280
46   184,509,266,760
48   279,871,768,995
--------------------
   1,171,666,558,334
```

which indicates that approximately 1/4 of the positions enumerated here are illegal or unreachable. Still, the ease of indexing with this naive scheme may compensate for not doing anything more compact but more confusing.
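The totaller itself is only a few lines. Here's a sketch of the computation (a reconstruction, not the original program; `comb` is in Python's standard `math` module):

```python
from math import comb

PITS = 12
TOTAL_STONES = 48

def positions(stones, pits=PITS):
    # stars and bars: n indistinguishable stones go into m
    # distinguishable pits in comb(n + m - 1, m - 1) ways
    return comb(stones + pits - 1, pits - 1)

# sum over every stone count from 1 to 48, skipping the impossible
# 47-stone case (the first capture removes 2 or 3 stones)
total = sum(positions(n) for n in range(1, TOTAL_STONES + 1) if n != 47)
print(total)   # the 1,171,666,558,334 figure in the table above
```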

## Mancala/Oware/Awari

Checkers is driving me nuts. I thought maybe I should implement a different board game, just for variety. Chess and Othello seemed obvious, backgammon might be fun, but games from the Mancala family also seemed interesting. What's more, you can even buy a cheap Chinese-made Mancala set at Target. Which I did. Apparently Awari has been solved completely, resulting in a complete 48-stone database which acts as an oracle. I'm not sure that matters to me. Anyway, just thought I'd write this down, so when a year passes and I haven't done anything, I'll be nudged into thinking about it again.

## Classic Board Games for Computer Implementation?

So, my experiments with my checkers program Milhouse have been fun and interesting. There is still work to be done: I don't think a machine that can't properly play First Position can reasonably be called a good checkers player (even though I still make errors while practicing it myself against Cake), but I've learned a lot. I've been toying with the possibility of doing a rewrite of Milhouse, but maybe that isn't the best use of my precious after-hours time. For the last couple of days, I've been musing about the possibility of finding other strategic board games that might be fun to try. Here is a rough list of my internal criteria:

- It should be a game that people play. It would be great if there was some existing scholarship on human play, including test positions.
- It should be a game which is uncopyrighted and free from legal restrictions for use. Hence my use of the term “classic”.
- It should be reasonably hard. Trivial games like Tic Tac Toe need not apply.

There are some additional considerations:

- I’m willing to consider games with a chance component (such as dice in backgammon) or a gambling component (such as the doubling cube in the same).
- I prefer games which are more abstract, whose difficulties arise from position and interaction more than complicated rules.

So, what games are out there?

Well, obviously there is **chess** and its variants, including **Shogi**, **Xiangqi** and a slew of modern variants. Originally I had a goal of implementing chess, but I must admit that it doesn't appeal to me quite as much now, largely because I view it as less abstract than games like checkers. The difficulty of First Position arises not because of complex interactions between different piece types, but because of the interplay of very simple elements. Still, I have considered the possibility of taking one of the variants of chess that are played on small boards (perhaps 5×6) and developing opening and endgame databases for these. I haven't done the math on what it would take to resolve the game theoretic value of one of these small games, but it might be tractable with the computing power that I have at my disposal.

There are obviously variants of **checkers**, including **international draughts** and the **misère variants**, also known as suicide checkers: the idea is that the person who loses all their pieces first *wins*. Martin Fierz has a version of Cake called *Suicidal Cake* which can play suicide checkers, an interesting variant with many of its own subtle problems. International draughts is a much more complicated game, with a larger board and some really interesting rules for jumping. There are other variants as well (Polish, Turkish, Italian) which could be considered as strong candidates.

**Othello** or **Reversi** are interesting games of position, and which can use many of the techniques we see in checkers and chess (namely opening books and endgame databases). There is also a fair amount of human knowledge about the game, which is good.

**Backgammon** introduces chance into the mix, which generally means that you need to use a different set of techniques than we've seen so far. I also like to play backgammon (although I am only a middling player). Backgammon has been the subject of quite a bit of machine learning research, and there exist many implementations to spar against.

**Hex** is a very nearly perfect abstract game: players alternate placing stones, each trying to connect their side of the board to the opposite side. No draws are possible. An entire game can be displayed as a single graphic, which shows the order in which stones were placed on the board. The high branching factor makes traditional alpha-beta search less attractive.

**Go** has many of the same characteristics, with the added bonus of having lots of good human players. There has been a lot of interesting research recently about playing Go.

**Arimaa** was designed to be hard for computers to play, but frankly, I think that it achieves this only by making a rule system which is complicated. It seems that between the push, pull, freeze and trap rules, there are at least two and maybe three that could have been eliminated. I also suspect that while it is (according to the authors) "easy to learn", that really only means "easy to learn the rules". I would make the (unsupported) claim that most human play is quite weak as well.

Games like **Awari** and other games of the mancala family have been studied quite extensively. **Connect 4** has been solved, but is still moderately interesting.

Any other board games I’m missing? I await your suggestions and/or advocacy.

## A simple, naive implementation of a MasterMind solver

Okay, I woke up this morning and decided to code up a version of a MasterMind solver. About fifteen minutes later, I had this tremendously slow implementation, which nevertheless seems to solve codes in very few guesses (it averaged just 4.71 guesses in the short run of 100 trials that I just ran). It's tremendously slow in running time though, for reasons which I should have guessed before I wrote it. Nevertheless, it does run fast enough that if you wanted to write a Mastermind solver, it would work just fine.

Here's the idea. Start by simply picking a guess at random. Assuming that you didn't win, you need to generate a new guess. Let's generate one and pretend that it was the secret: if you evaluated it against all the previous guesses you made, it would need to give the same results. If it doesn't, it can't be the secret, so don't use it as your guess. Try a different one.

That's it. Of course, the reason it's slow is that when you get near the end, very few of the random choices you make will match all the previous results (there are 1296 possibilities in the 6,4 version of classic Mastermind), so you might have to try lots and lots and lots of guesses before you find a consistent one. The better way would be to keep a list of all possible codes, and in each round strike out the ones that are inconsistent with the previous guesses. Then you can select among only the consistent codes. I'll write up that optimization shortly, which should speed up the code by several hundred times. But this one does work… check it out.

```python
#!/usr/bin/env python
#
# A naive implementation of the game MasterMind, to test some ideas
# that I learned in a brief scan of some of the known literature.
#
import random

# traditional Mastermind uses six different colors, not exactly certain
# what they were, so I chose Red/Yellow/Blue/Orange/Green/Purple...
colors = ['R', 'Y', 'B', 'O', 'G', 'P']

# generate a random secret code (or guess)
def randomcode():
    return ''.join(random.choice(colors) for i in range(4))

def evaluate(guess, secret):
    b = 0
    gs, ss = set(), set()
    for i in range(4):
        if guess[i] == secret[i]:
            b = b + 1
        else:
            gs.add(guess[i])
            ss.add(secret[i])
    return b, len(gs.intersection(ss))

def solve(secret):
    history = []
    guess = randomcode()
    nguesses = 1
    while True:
        b, w = evaluate(guess, secret)
        history.append((guess, b, w))
        print(guess, b, w)
        if b == 4:
            break               # we solved it...
        # generate a new random guess, consistent with
        # all the previous guesses.
        nguesses = nguesses + 1
        while True:
            guess = randomcode()
            consistent = True
            for g, b, w in history:
                nb, nw = evaluate(guess, g)
                if nb != b or nw != w:
                    # nope, not consistent
                    consistent = False
                    break
            if consistent:
                break
    print()
    return nguesses

ROUNDS = 100
totalguesses = 0
for x in range(ROUNDS):
    totalguesses = totalguesses + solve(randomcode())

print("average number of guesses was %.2f" % (float(totalguesses) / ROUNDS))
```
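For what it's worth, the candidate-list optimization described above is only a small change; a sketch of the idea (using a compact rewrite of `evaluate`, not the exact code from the listing above) looks something like this:

```python
import random
from itertools import product

colors = ['R', 'Y', 'B', 'O', 'G', 'P']

def evaluate(guess, secret):
    # black pegs: exact matches; white pegs: colors in common among
    # the rest (sets, as above, so repeated colors count only once)
    b = sum(g == s for g, s in zip(guess, secret))
    gs = {g for g, s in zip(guess, secret) if g != s}
    ss = {s for g, s in zip(guess, secret) if g != s}
    return b, len(gs & ss)

def solve(secret):
    # start with all 1296 possible codes...
    candidates = [''.join(p) for p in product(colors, repeat=4)]
    nguesses = 0
    while True:
        guess = random.choice(candidates)
        nguesses += 1
        b, w = evaluate(guess, secret)
        if b == 4:
            return nguesses
        # ...and strike out every code that wouldn't have
        # produced the same score against this guess
        candidates = [c for c in candidates
                      if evaluate(guess, c) == (b, w)]
```

Picking the next guess becomes a single `random.choice` from the surviving candidates instead of a rejection loop, and the secret itself always survives the filtering, so the loop is guaranteed to terminate.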

**Addendum:** I coded up the optimization that I mentioned above, and it seemed not to help that much. Run time dropped from 21 seconds to 17 or so. Not very good. Perhaps I don’t know what’s going on here. Then again, I still haven’t had any caffeine this morning, so perhaps my puzzler isn’t quite up to speed yet.

## Investigations into the Master Mind Board Game

Over at the Programming Praxis blog, the task of the day is to write a program to solve the game Mastermind. Mastermind is actually a fairly interesting game with a rich set of mathematics behind it, and yet it's small enough to analyze easily using modern computing power. I might try to use this as a test program to try out the new Go programming language from Google. In the meantime, here are some links:

Investigations into the Master Mind Board Game

Here’s a link to computer science luminary Don Knuth’s paper *The Computer As Master Mind*

## HexWiki

Work on my checkers program `milhouse` has stalled a bit: I have a problem in the transposition tables that is fighting against my endgame database attempt, and it's subtle enough that I haven't had time to work it out. I've been relaxing by reading some more about other board games, and have begun to read a bit about attempts at games with very high branching factors like Go and Hex. I frankly don't have the brain power for Go, so I thought I might think about Hex a bit more. It's just about as simple a game as you can get, and yet it has deep strategy that seems more intuitive to me. Some useful information is stored on the HexWiki.

## Olithink: a small, competent chess program

I don't know if any of you have noticed that you can see some of the things that I've posted on the current day in previous years on the left. I find it nice to recycle topics which I might have talked about in previous years and bring them back to the top of my consciousness. Back in 2004, I linked to two bits of software: the first, a fledgling internet telephone project called Skype (you may have heard of it since), but the second was Oliver Brausch's 1600-line chess program called Olithink. In revisiting his chess page, I see he's trimmed the size to 1424 lines, and he claims it is now competitive with Crafty, even though Olithink doesn't have an opening book or endgame databases. I fired it up against Crafty using Xboard, and in this single game, Crafty did beat Olithink, but I must admit that it held its own fairly well until making a mistake at move 48.

Here’s the pgn:

```
[Event "Computer chess game"]
[Site "mark-vandewetterings-macbook.local"]
[Date "2008.04.08"]
[Round "-"]
[White "Crafty-20.14"]
[Black "OliThink 5.1.2"]
[Result "1-0"]
[TimeControl "40/300"]

1. e4 e5 2. f4 exf4 3. Nf3 g5 4. h4 g4 5. Ne5 Qe7 6. d4 d6 7. Nxg4 Qxe4+
8. Qe2 Qe7 9. Qxe7+ Bxe7 10. Nf2 Nh6 11. Nc3 Nc6 12. Bxf4 Nf5 13. Nd5 Bd8
14. O-O-O Nfxd4 15. Ne4 Be6 16. Nef6+ Bxf6 17. Nxf6+ Ke7 18. Bg5 h6
19. Ng8+ Kf8 20. Nxh6 Re8 21. c3 Nf5 22. Nxf5 Bxf5 23. Bb5 a6 24. Ba4 Re2
25. Rd2 Rxd2 26. Kxd2 Be4 27. Rg1 b5 28. Bd1 Ne5 29. b3 Kg7 30. Rf1 Re8
31. Bf6+ Kf8 32. g3 Re6 33. Rf4 Ng6 34. Rf2 Ne5 35. Be2 Nf3+ 36. Bxf3 Rxf6
37. Ke2 Re6 38. Bxe4 Rxe4+ 39. Kf3 Re1 40. Rc2 d5 41. h5 Kg7 42. g4 c6
43. Kf4 Re4+ 44. Kf3 Kh6 45. Re2 Rxe2 46. Kxe2 f5 47. Kf3 Kg5 48. gxf5 c5
49. f6 Kxf6 50. Kf4 a5 51. a3 Ke6 52. b4 axb4 53. axb4 cxb4 54. cxb4 Kf6
55. Ke3 Ke5 56. Kd3 Kf6 57. Kd4 Kg5 58. Kxd5 Kxh5 59. Kc6 Kg6 60. Kxb5 Kf6
61. Kc6 Ke7 62. Kc7 Ke6 63. b5 Kd5 64. b6 Kd4 65. b7 Ke3 66. b8=Q Kd3
67. Qe8 Kc2 68. Kb6 Kd3 69. Kc5 Kc2 70. Qe3 Kb2 71. Kc4 Kc2 72. Qf2+ Kb1
73. Kb3 Ka1 74. Qb2# {White mates} 1-0
```

The better move would appear to be 48… Kxf5.

Anyway, it’s a very neatly written program. Check it out.

**Addendum:** I left Crafty analyzing the above position for five minutes. Kxf5 does appear to be better, but even with that move Olithink is already almost two pawns down. Crafty scores the move that Olithink actually chose as being down by 4 pawns.

## Sudoku enumeration

Here's a page that answered some of my questions about Sudoku: Sudoku enumeration

Now, if only it could answer the most pressing question I have: why am I so slow at them?


## Pong is 34 years old today! Happy Birthday Pong!

According to digg, Pong is celebrating its 34th anniversary today. Feh. I'm older than that.

Sniffle.

Bonus links: Wikipedia has some nice info, including a link to William Higinbotham’s Tennis for Two game developed at Brookhaven National Labs.

Oh, and the animated gif on the right? Screendumps from the CHIP-8 emulator I wrote a few years back.

## Visualizing the code of Atari 2600 games

Ben Fry has some cool graphics which visualize the code in several old Atari 2600 video games. Basically, he disassembles code and marks all possible branches with arcs between the lines of code, and changes all data tables to graphical representations of the bit patterns, revealing many sprites and other data tables. I dunno how useful it is, but it’s kind of neat.

## Tinkering with ~~Tinkertoys~~ Zometool

Zome is a nerd toy that allows you to create all sorts of amazing polyhedral models. Some people are more serious about them than others.

**Addendum:** A couple of days ago, I picked up at Walgreens one of those generic "ball bearing and magnet" sets that let you build similar structures. It was $9.99 for a 250-piece set, which seemed pretty good. I assembled a stellated icosahedron, but I don't really recommend the set: the magnets are incredibly weak, and it's almost impossible to move the model without destroying it.

## Gutenberg Gem: The Boy Mechanic: Volume 1 by Popular Mechanics

Wow. *Very* cool to take this step back in time and see what young hackers in 1913 were doing. A lot of lame stuff, but some gems, like a line harmonograph, a “key card” for writing secret codes on post cards, handcutting gears and racks for models, a miniature “Pepper’s Ghost”, and a homemade water wheel. And that’s just in the first hundred pages, there are almost six hundred! Be sure to check out the PDF file.

## Gutenberg Gem: Amusements in Mathematics, by H.E. Dudeney

A classic of puzzles (most of which I would call only marginally mathematical), *Amusements in Mathematics* by Henry Ernest Dudeney, has been made available via Project Gutenberg. Lots of puzzles having to do with geometric dissection, board games, and a host of other topics. Very nice.