Category Archives: Checkers

Milhouse defeats Chinook (on amateur level anyway)

Oddly, Chinook seemed to know the end was coming before Milhouse recognized it. But sure enough, Milhouse managed to navigate its way to a victory.

[Event "Sparring Match"]
[Date "2009-04-27"]
[Black "Milhouse"]
[White "Chinook (Amateur)"]
[Site "Internet"]
[Result "1-0"]
1. 12-16 22-17 2. 16-19 24x15 3. 11x18 23x14
4. 9x18 28-24 5. 8-11 26-23 6. 6-9 23x14
7. 9x18 24-19 8. 2-6 17-14 9. 10x17 21x14
10. 4-8 31-26 11. 11-15 19x10 12. 6x15 25-21 
13. 8-11 29-25 14. 1-6 26-22 15. 6-9 21-17
16. 9-13 27-23 17. 18x27 32x23 18. 15-19 23x16
19. 11x20 30-26 20. 20-24 25-21 21. 7-10 14x7
22. 3x10 26-23 23. 24-27 23-19 24. 27-31 19-16
25. 31-26 16-11 26. 10-15 11-8 27. 26-23 8-3
28. 23-18 3-8 29. 18x25 17-14 30. 25-22 14-10
31. 22-18 8-11 32. 15-19 10-7 33. 13-17 21x14
34. 18x9 11-15 35. 19-23 7-2 36. 23-26 15-10
1-0

Sparring Milhouse versus Chinook

Well, I’ve mucked around a bit and begun adding PDN (Portable Draughts Notation) logging to Milhouse. Here is a sparring match between the novice version of Chinook that you can play online and Milhouse, running at around a 15 ply search with a six piece endgame database. It seemed clear that the position was a draw at the end. Chinook seemed to think I had it on the ropes around move 19. Did milhouse miss a strong line, or is the novice level of Chinook misguided? I’m not sure.

[Event "Sparring Match"]
[Date "2009-04-27"]
[Black "Milhouse"]
[White "Chinook (Novice)"]
[Site "Internet"]
[Result "1/2-1/2"]
1. 11-15 22-18 2. 15x22 25x18 3. 8-11 29-25 4. 4-8 24-20 5. 12-16 26-22
6. 8-12 28-24 7. 9-13 30-26 8. 10-15 24-19 9. 15x24 18-15
10. 11x18 20x11 11. 7x16 27x11 12. 18x27 31x24 13. 12-16 32-27 14. 6-10 24-20
15. 10-15 11-8 16. 3x12 20x11 17. 12-16 11-8 18. 15-19 21-17
19. 16-20 27-23 20. 19-24 23-19 21. 24-27 19-16 22. 27-32 17-14
23. 20-24 8-4 24. 24-27 25-21 25. 1-6 16-11 26. 13-17 22x13
27. 6-9 13x6 28. 2x18 26-23 29. 18-22 21-17 30. 22-25 17-14
31. 27-31 23-19 32. 31-26 19-16 33. 25-29 14-10 1/2-1/2
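The PDN text above is simple enough to generate mechanically. Here's a rough sketch of a writer (my own illustration, not Milhouse's actual logging code; the tag names just mirror the headers used in these games):

```python
# Sketch of a minimal PDN-style game writer: header tags in brackets,
# one move number per black/white pair, result on the last line.
def pdn_game(tags, moves, result):
    lines = ['[%s "%s"]' % (name, value) for name, value in tags]
    numbered = []
    for i in range(0, len(moves), 2):
        # e.g. "1. 12-16 22-17"
        numbered.append("%d. %s" % (i // 2 + 1, " ".join(moves[i:i + 2])))
    lines.append(" ".join(numbered))
    lines.append(result)
    return "\n".join(lines)
```

Feeding it the header tags and the flat list of half-moves reproduces the layout above.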

Goofing around with Unicode…

I’m kind of old school. I tend to write a lot of programs with character-based, line-oriented interfaces. Part of it is a sort of organized laziness: I’d rather spend my time working on “the real” part of the program. But sometimes you realize that your program could be enhanced by a slightly nicer look. Recently, I’ve become interested in the potential of using Unicode to augment the kind of symbols that we can use in program output. I discovered that there are Unicode glyphs for checkers pieces in the character map. While they aren’t as nice as the PostScript-generated graphics that I’ve done before, they are nicer than just using R and W as in my conventional program. To test out what could be done, I made a little Python program that outputs the starting board. Witness:

picture-1

This is rendered with the GNU unifont, which doesn’t exactly have glamorous glyphs, but they are quite legible. I might actually add this to my checkers program.
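For reference, such a program can be quite short. This is a reconstruction, not the original script; the glyphs are U+26C0–U+26C3, the white and black draughts men and kings:

```python
# Reconstruction of a Unicode checkers board printer. U+26C0/U+26C1 are
# the white draughts man/king glyphs, U+26C2/U+26C3 the black ones;
# empty playable squares get a middle dot.
BLACK_MAN, WHITE_MAN, EMPTY = "\u26C2", "\u26C0", "\u00B7"

def starting_board():
    rows = []
    for r in range(8):
        cells = []
        for c in range(8):
            if (r + c) % 2 == 1:           # dark (playable) square
                if r < 3:
                    cells.append(BLACK_MAN)
                elif r > 4:
                    cells.append(WHITE_MAN)
                else:
                    cells.append(EMPTY)
            else:                          # light square, never occupied
                cells.append(" ")
        rows.append("".join(cells))
    return "\n".join(rows)

print(starting_board())
```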

Addendum: I wrote the above in Python, which has a lot of nifty support for Unicode strings. I thought it would be relatively simple to figure out how to do this in C, but as it turns out, it’s not as easy as I thought. Also, the checkers characters are double width, and editors like vim don’t seem to really like double wide characters that much. Harumph. I’ll figure out how to do this some other time.

Milhouse beginning to show signs of promise…

Well, I got move ordering implemented, and the performance on my two previous test cases seems much more comparable. And with this addition, milhouse is able to solve some fairly subtle problems, not just the toy ones from Pike’s book of Checkers puzzles. For instance, consider this position:

board4

Red is to move and draw. Only one move leads to a draw.

Give up? Me too. I wrote a checkers program so I wouldn’t have to think. 🙂 I made some changes so it could do some rudimentary analysis of positions. Here’s the output:

         -7602 :  1-6  10x1  18-15  1-6  15-11  5-1  11-8   1-5   8-12  5-9   4-8   9-14  8-11  6-2 
         -7421 :  4-8  20-16  8-12 16-11 18-22 10-14 22-25 14-9  25-21  9-14 21-25 14-9  25-21  9-14
         -7600 : 18-14 10x17  1-6   5-1   6-10  1-5  10-15  5-1   4-8   1-5   8-11  5-9  15-18 20-24
         -7600 : 18-15 10x19  1-6   5-1   6-9   1-5   9-13  5-1  13-17 20-16 17-21  1-5  21-25  5-1 
         -7421 : 18-22 10-14  4-8  20-16  8-12 16-11 22-25 14-9  25-21  9-14 21-25 14-9  25-21  9-14
            +0 : 18-23 10-7  23-18  7-2  18-14  2-7  14-9   7-2   9-14  2-7  14-9   7-2   9-14  2-7 

Yep, 18-23 leads to a draw (by repetition). All others, doomed.

Milhouse still hides some mysteries…

So, I was doing some more testing this morning, and noticed an anomaly in my checkers program, Milhouse. Consider the following position:

LOADED puzzle 177: Testing move ordering..
Color is set to red.
           White
        +--------+
        | R - - R|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - W |
        +--------+
           Red

This has a 33 ply solution, which milhouse finds pretty easily, yielding the following statistics:

final score = 10000
transposition tables
... 1049502 gets (1015774 tier0 / 0 tier1 / 33728 failed)
... 757147 puts (33728 tier0 / 723419 tier1) (298250 L/365516 X/93381 U)
... 349949 chinook database accesses.
        1. 29-25
1.173 seconds elapsed.

Not bad. The caching works reasonably well, and I only need to access about 349K positions from the Chinook endgame database (which is basically the entire thing a couple of times over, which isn’t good, but not fatal). But let’s try the same position, but with the board rotated 180 degrees:

LOADED puzzle 178: Testing move ordering..
Color is set to red.
           White
        +--------+
        | W - - -|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |R - - R |
        +--------+
           Red

Much to my chagrin, this took a lot longer, but also found a shorter solution (29 ply). Grumble!

... 78459032 gets (78280388 tier0 / 14207 tier1 / 164437 failed)
... 65025084 puts (164623 tier0 / 64860461 tier1) (19696902 L/36414475 X/8913707 U)
... 34724781 chinook database accesses.
        1. 1-5
55.834 seconds elapsed.

I’m guessing that move ordering plays a huge role in this (I haven’t implemented even the most rudimentary version). My move generator basically generates moves in a fixed order, which means that the entire search tree is searched in a consistent order. It’s not particularly surprising that this order doesn’t cause optimal pruning of the search tree. My guess is that implementing some form of move ordering is a good idea. I suspect I can do it by just searching the “best move” previously recorded in the hash table before all others. That should be pretty straightforward to implement.
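That hash-move ordering idea is only a few lines. A hypothetical sketch, with `tt` and `hash_of` standing in for the real table and hash function:

```python
# Sketch of hash-move ordering: try the best move previously recorded
# in the transposition table first, then the remaining moves in the
# generator's fixed order. `tt` maps position hashes to entries that
# record a "best" move; both are stand-ins for the real structures.
def order_moves(moves, position, tt, hash_of):
    entry = tt.get(hash_of(position))
    best = entry.get("best") if entry else None
    if best in moves:
        return [best] + [m for m in moves if m != best]
    return list(moves)
```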

I’m still a little concerned about the discrepancy in the depth of searches.

I’ll ponder that some more later.

Some advances on Milhouse…

I finally got around to totally ripping out the old implementation of transposition tables, and installing a new one based upon hints I read about on various web pages, mostly originating with some of the ideas for cache management that Ken Thompson implemented for his chess program Belle. The scheme uses two caches: one stores board positions near the root of the tree, the other holds all remaining nodes. The reasoning is that if you cache near the leaves, you are likely to need the same values again soon, but if you cache higher up in the tree, you are more likely to save yourself long searches.
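My reconstruction of the two-table scheme (not Milhouse's actual C code) looks roughly like this: a depth-preferred table keeps entries from expensive searches high in the tree, and an always-replace table catches everything else.

```python
# Two-tier transposition table sketch (Belle-style). tier0 is
# depth-preferred: an entry is only evicted for an equally deep or
# deeper search. Everything that loses that contest falls through to
# tier1, which always replaces. Real tables are fixed-size arrays
# indexed by hash; plain dicts keep the sketch short.
class TwoTierTT:
    def __init__(self):
        self.tier0 = {}
        self.tier1 = {}

    def put(self, key, depth, value):
        old = self.tier0.get(key)
        if old is None or depth >= old[0]:
            self.tier0[key] = (depth, value)
        else:
            self.tier1[key] = (depth, value)

    def get(self, key):
        hit = self.tier0.get(key) or self.tier1.get(key)
        return None if hit is None else hit[1]
```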

Using this, I managed to get my “two king vs. one king” endgames working, searching out 33 ply in just a few seconds.

LOADED puzzle 173: Simple Tests (harder)..
Color is set to red.
           White  
        +--------+
        | R - - R|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - W |
        +--------+
           Red    

milhouse: depth 40
search depth is now 40
milhouse: playout
     1. 29-25 {10000}   1-5 {-10000}
     2. 25-21 {10000}   5-1 {-10000}
     3. 21-17 {10000}   1-5 {-10000}
     4. 17-13 {10000}   5-1 {-10000}
     5. 32-27 {10000}   1-5 {-10000}
     6. 27-23 {10000}   5-1 {-10000}
     7. 23-18 {10000}   1-5 {-10000}
     8. 18-14 {10000}   5-1 {-10000}
     9.  13-9 {10000}   1-5 {-10000}
    10.   9-6 {10000}   5-1 {-10000}
           White  
        +--------+
        | - - - -|
        |- - - - |
        | - - - -|
        |- - - - |
        | - - R -|
        |- - - - |
        | - - R -|
        |- - - W |
        +--------+
           Red    

milhouse: playout
     1. 14-10 {10000}   1-5 {-10000}
     2.   6-1 {10000}   5-9 {-10000}
     3.   1-5 {10000}  9-13 {-10000}
     4. 10-15 {10000} 13-17 {-10000}
     5. 15-18 {10000} 17-13 {-10000}
     6. 18-22 {10000}  13-9 {-10000}
     7.  5x14 {10000}
           White  
        +--------+
        | - - - -|
        |- - - - |
        | - - R -|
        |- - - - |
        | - - R -|
        |- - - - |
        | - - - -|
        |- - - - |
        +--------+
           Red    

milhouse: 

I then tried to play a game against the novice level of Chinook. I ended up in this position, with Chinook playing white and Milhouse black. Chinook kept bragging that I was in “serious trouble”, but the reality is that the position is drawn, and once in a six piece drawn end game, Milhouse won’t stumble (barring bugs) so I called this a draw.

board3

Alpha-Beta Search is Hard To Debug

So, I’ve spent a few hours over the last couple of weeks trying to debug milhouse, my computer checkers program that I tinkered together over the last few years. It’s kind of maddening, because in so many ways it seems to work, but it exhibits a number of behaviors which can only be classified as bugs. Here’s an example. Consider this simple 2K vs. 1K endgame (entirely typical, and the kind of endgame that even I can play).

board1

It’s easy, right? Sure enough, Milhouse finds the 14 move sequence that basically forces the White king to flee the double corner, where disaster awaits. The search is very quick, taking just a fraction of a second.

But here’s the thing: nominally more difficult puzzles seem to confuse milhouse entirely. Let’s modify the puzzle to make the winning sequence just a bit longer.

board2

Obviously, this is just 8 half moves (or plies) away from the previous one. (You can see that if White does anything but jigger back and forth, he does nothing but hasten his own demise.) But for some odd reason, a 25 ply search doesn’t discover this winning line.

So, here’s the thing: alpha-beta search is a lot like binary search. To write a bug free binary search is remarkably challenging. Try it sometime. There are all sorts of odd edge conditions you can get wrong if you aren’t careful. Jon Bentley uses it as the prototypical example of how to develop and debug software in his classic work Programming Pearls. And alpha-beta search is 10x worse, for a number of reasons:

  1. The algorithm has the same sort of hidden subtlety that binary search does. There are many edge conditions (is the invariant alpha < beta, or alpha <= beta? does a cutoff occur when the score is >= beta, or strictly greater?) that aren’t consistently described in the classic literature on algorithms.
  2. It operates on (typically) very large search trees. It’s hard to find bugs by examining the search tree because of its raw size, and because many minor problems (such as the comparison issues above) are effectively hidden because they often don’t affect the outcome of searches.
  3. There is lots of information about alpha beta search on the web, but almost all available sources suffer from poorly described invariants, and often don’t implement the necessary complications (such as transposition tables) which further complicate the design.
  4. If you do find code that does work and is non-trivial, you can’t understand it. Game programmers love to “optimize” code, which means that all useful patterns and structure have been abstracted out of the program, leaving only a hopeless hodgepodge.

Okay, I’m just bitching. I’ve got 99% of this code right, but the 1% is causing me grief.

Addendum: Oh, and here’s another interesting thing. You might remember from a previous blog posting that I counted the number of possible checkers positions for each number of pieces. If you look at the table, you’ll find that there are about a quarter of a million positions with three pieces or fewer. My checkers program can compute about twenty five million positions a second, yet I’m not finding the winning search. It’s not too hard to imagine what the problem is. Let’s assume that the branching factor for this problem is roughly eight. If the winning line is 25 ply long, the full search tree has 8^25 nodes. That’s about 3.7E22 positions. Even with alpha-beta searching, that’s a huge number of nodes. This is where transposition tables are supposed to come in. There are many paths that lead toward equivalent positions, so there is no reason to re-search the same trees over and over again. So, you maintain a table that records the outcomes of previous searches, and if the same position shows up, you already know the answer and don’t have to search.

Obviously, mine isn’t working too well. If it were, I’d expect to quickly expand the search perimeter to find nodes I hadn’t seen before, and locate the winning line.
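To make the moving parts concrete, here's the general shape of the algorithm in question: a fail-soft negamax alpha-beta with a bound-flagged transposition table, sketched in Python over an explicit toy game tree. This is my illustration of the standard technique, not Milhouse's code.

```python
import math

LOWER, EXACT, UPPER = -1, 0, 1    # is a stored score a bound or exact?

def alphabeta(node, depth, alpha, beta, tree, leaf_score, tt):
    # `tree` maps a node to its children; `leaf_score` gives terminal
    # scores from the perspective of the side to move at that node.
    orig_alpha = alpha
    entry = tt.get(node)
    if entry is not None and entry[0] >= depth:
        _, flag, score = entry
        if flag == EXACT:
            return score
        if flag == LOWER:
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if alpha >= beta:
            return score
    children = tree.get(node, [])
    if depth == 0 or not children:
        return leaf_score[node]
    best = -math.inf
    for child in children:
        # Negamax convention: negate the score and flip the window.
        best = max(best, -alphabeta(child, depth - 1, -beta, -alpha,
                                    tree, leaf_score, tt))
        alpha = max(alpha, best)
        if alpha >= beta:          # beta cutoff
            break
    # Record whether the score is exact or merely a bound; reusing a
    # bound as if it were exact is one of those subtle bugs.
    if best <= orig_alpha:
        flag = UPPER
    elif best >= beta:
        flag = LOWER
    else:
        flag = EXACT
    tt[node] = (depth, flag, best)
    return best
```

Even at this size, the alpha/beta inequalities and the bound bookkeeping are exactly the places where an off-by-one hides for weeks.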

Bob Hyatt’s Online Publications

I’ve been poking half-heartedly at Milhouse, my checkers program this morning. I found that a particular puzzle (puzzle 34) that I got from a puzzle book seems to behave so oddly that I must believe that there is some underlying bug in my program.

But rather than dwell on that, I found a link to a collection of papers by Bob Hyatt, including one on parallel search, a topic which is much less completely covered in available literature. Bookmarking this for later.
Online Publications

Addendum: I finally added a nice screen dumping capability to my program, so I can leave some nice graphics for positions, rather than the text-only versions I had before. Here’s the troublesome position, with red to move. Chinook sees it as a win. I think 5-1 is the right move. I need to think on this some more.

board

Dusting off Dusting off Milhouse…

A year ago, I blogged that I had dusted off my old checkers program, named Milhouse, and had uncovered a bug in my transposition tables. Today, a year later, I still haven’t debugged it entirely. I tried adding routines to access the Chinook endgame database files to it, but it’s not quite there yet: it seems to go seriously astray in certain positions. I’ve got a few goals in this project which are as yet unrealized:

  • Make the endgame stuff work.
  • Tune the evaluation function.
  • Develop a program to generate a good opening book.
  • Make a nice interface for it, and release it to the universe.

Doubt I’ll get to any of this today.


Addendum:
It can solve some pretty tricky checkers problems. This position seems unobvious to checkers novices like myself. White is to move and win:

LOADED puzzle 101: Rob Pike, Puzzle #28.
Color is set to white.
           White  
        +--------+
        | W - - R|
        |- - r r |
        | - - - w|
        |r r - - |
        | - - R -|
        |- - r r |
        | W - - W|
        |- - r - |
        +--------+
           Red    

On the other hand, puzzles like this one remain elusive. The Chinook database reports that this position is a win for red, but it’s pretty difficult. It takes milhouse a 22 ply search to identify the path which leads to a strong advantage. Ironically, I find it fairly easy to guess the right answer in this position. Red has both white men pinned, and releasing those pins would probably not be a good idea. This leaves the red man on 15 as a possible mover. You could move to 19, force the white king to give way, and then march down the side to get kinged; what’s less obvious is that you can then drive the white king to the single corner where it too can be pinned. Of course, I’m a checkers idiot, so what do I know? The winning line appears to be to force an exchange of the checker originally on 15 for the white king, and then capture the white checker originally on 12, which gets pinned in the corner. Then you have a classic 2:1 king endgame.

LOADED puzzle 85: 2nd position, easy, from The Checker Maven.
Color is set to red.
           White  
        +--------+
        | - - - -|
        |W - - - |
        | - - - -|
        |- - - - |
        | - r - w|
        |w R - - |
        | - - R -|
        |- - - - |
        +--------+
           Red    

Chinook Endgame Database working (pretty much)…

Well, in a fit of productivity (or what passes for productivity when you are working on useless pet projects), I decided to get the Chinook 6 piece endgame database working in my checkers program. How hard could it be? They give you code to access it after all. A few simple adaptations, and….

Well, it’s not that simple. The code is dreadful. I mean really bad. First of all, it’s not even ANSI C. I haven’t seen K&R style declarations in maybe fifteen years, and haven’t written any non-ANSI C in twenty. But protoize does a pretty good job of fixing that up. There are a slew of warnings, but those aren’t really difficult to deal with: you just tidy up the code a bit, and it’s clean.

Of course, it still doesn’t work. That’s because (sorrow of sorrows) they wrote the code in a very unportable way. This drives me crazy. They use the “long” datatype all over the place, but as written, I am pretty sure that the code will fail horribly if long is 64 bits. Which, of course, it is on every machine I play with these days. And there are a few signed/unsigned issues, and the fact that lseek doesn’t return a long anymore to confound you a bit. And, of course, the interface routines are all written to accept board positions in a format which only the Chinook guys could love, and it took me way too long to create an adapter to make it work.

But I think it does. At least, it seems to pass some preliminary tests:

milhouse, a checkers program by Mark VandeWettering.
transposition table enabled, 4194304 entries, 96 Mb used
quiescence extension enabled
165 puzzles in the puzzle library
sanity checking sizeof(long) = 8
initializing Chinook endgame database...
... creating table
... using database DB6
... allocating 48132029/47004 entry index
... initializing 2...3...4...5...6... database
... allocating 16384 buffers
... initializing 16384 buffers
milhouse: puzzle 164
LOADED puzzle 164: First Position, white should win, but it takes 92 moves..
Color is set to white.
	   White  
	+--------+
	| - - - -|
	|- - - R |
	| - - - -|
	|- - - - |
	| - - - -|
	|- w - - |
	| - - w -|
	|- r - - |
	+--------+
	   Red    

milhouse: db
Chinook sees this as a win for white
milhouse: 

This is the classic First Position, which it sees as a win for white. Unfortunately, it doesn’t actually play the ending very well. It won’t lose it as white (ever), but because of search horizon issues, it won’t find the winning line either (it’s 92 moves long, so that’s not surprising, but still).

More on this later.

On the futility of expert checkers play through deep searches alone…

I really can’t put off creating an endgame database for my checkers program for much longer. I’ve been studying the so-called “First Position” for a while, and it’s basically hopeless to find the way through to the win for white from this position:

	   White  
	+--------+
	| - - - -|
	|- - - R |
	| - - - -|
	|- - - - |
	| - - - -|
	|- w - - |
	| - - w -|
	|- r - - |
	+--------+
	   Red    

What’s so difficult about this? It’s a very, very tricky position. As Jim Loy says, “The procedure: Force the piece to advance by immobilizing the king. The king and piece cannot both fit in the double corner, so the king is forced out, where his situation is hopeless.” Easier said than done! Try following his analysis all the way through. Milhouse can’t even seem to find the right winning combination from Figure 8, falling into a draw by repetition where it can’t make any headway in self play.

Here’s a link to the “perfect play”. 93 moves. First Position with Optimizations

Addendum: During play against the novice level of Chinook, I ended up on the losing side of a 4 on 3 king endgame. But here’s the thing: Chinook seemed to take a very, very long time in converting the endgame into the more manageable 3 on 2 endgame. It does point out a problem though: the endgame databases record only whether a given position is a win, loss or draw. In many cases, all possible moves are still wins, but you may not actually make any headway in reaching the endgame. For instance, check out the following position queried from the endgame database:


4 on 3… which way to go?

All available moves for white are wins (not too hard to believe, since white is up a king), but the question remains which move should I make? If you can’t see a “conversion” (an exchange or capture which simplifies the position) in your search depth, then you aren’t guaranteed to make any headway towards winning the game, and you kind of march around, hoping to stumble into the winning line.

You can build a database which stores the optimal move at any given position, but of course that takes more memory. Still, I think the 5 piece subset, and probably the 6 piece “perfect play” endgame databases are pretty reasonable given modern machines. That’s where I’m heading.
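Short of storing optimal moves, one cheap heuristic along these lines is to prefer winning moves that immediately convert. A hypothetical sketch, where `apply_move`, `db_value`, and `piece_count` stand in for the real move generator, database lookup, and material counter (this is not Chinook's or Milhouse's actual API):

```python
# Given only win/loss/draw values, prefer a winning move that captures
# (strictly reduces material): a conversion is guaranteed progress
# toward the simpler endgame. All the callables here are stand-ins for
# the real interfaces.
def pick_winning_move(position, moves, apply_move, db_value, piece_count):
    winning = [m for m in moves
               if db_value(apply_move(position, m)) == "WIN"]
    if not winning:
        return None                      # no winning move to pick
    converting = [m for m in winning
                  if piece_count(apply_move(position, m))
                     < piece_count(position)]
    return (converting or winning)[0]
```

Of course, a conversion isn't always available within one move, which is exactly why the marching-around problem shows up; a distance-to-win database solves it outright.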

More on checkers…

I’ve been thinking about improving the evaluation function that milhouse uses, and have been particularly interested in automatic tuning of these functions. The two obvious contenders as far as methods go are the TDLeaf algorithm and some kind of evolutionary approach. So, I was reading Chellapilla and Fogel’s paper “Evolving an Expert Checkers Playing Program without Using Human Expertise”, and pondered it a bit. In the paper, they list a game described as “Played Against Human Rated 2173”, which yielded the following position at move number nine:

	   White  
	+--------+
	| - w w w|
	|w w - - |
	| w w - w|
	|- w - - |
	| - r r w|
	|r r r - |
	| - r r -|
	|- r r r |
	+--------+
	   Red    

The human “expert” picked 11-16 here. It’s not a very good move. In fact, it loses a checker almost immediately. The response is 24-20, and the computer gets the better of a bunch of forced exchanges.

Later, at turn 17 Red, our human is faced with this position:

	   White  
	+--------+
	| - - - -|
	|- w w w |
	| w w - w|
	|r - - - |
	| - r r -|
	|- w r w |
	| - - - r|
	|- r r - |
	+--------+
	   Red    

and selects 3-7, which is another dog. The human in this case blundered into a pair of pretty obvious exchanges, which can be found with even relatively shallow searches and the most modest of evaluation functions. In other words, this position doesn’t actually illustrate any deep understanding of the game. I’ve heard some pretty stiff criticism of this paper before, but this is the first time I actually worked through the positions.

Milhouse v. Chinook

Well, this morning as I woke up, I decided to try to play milhouse against Chinook. Deluded by feelings of competency, I felt like it might be able to compete (at least at the novice level settings) against Chinook.

Well, it didn’t go as badly as it might have. With a 15 ply search, milhouse decisively dispatched Chinook at the novice level! Woohoo! Flush with confidence, I decided to play it at the intermediate level, and.. well, milhouse obviously played into a serious mistake. It blissfully saw a nearly even position for a couple of turns, while Chinook was taunting me with “Chinook is winning!”. We ended up in a very oddly cramped position; it doesn’t surprise me that milhouse can’t play such positions very well, because milhouse doesn’t really have an evaluation function which deals with mobility and cramps at all. Chinook polished me off with a 3 for 2 trade.

I would post the pdn files for the games, but my program doesn’t automatically write them, which is an excellent feature that should be added. I need to do some work to transcribe them. Perhaps early in the week.

Loy’s Checkers Problems

I was looking for some more difficult checkers positions to test my checkers program milhouse against. Jim Loy’s excellent page has a list of difficult ones. Unlike the typical puzzles I have in my Encyclopedia of Checkers Puzzles by Pike, these aren’t the kind of puzzles you can typically solve in 10 ply. In particular, his puzzle #2 was marked as “difficult”, and was a “cook” of a position thought to be a draw. Interestingly enough, my checkers program, as feeble as its evaluation function is, seems to find the winning line for Red with a 15 ply search.

Loy’s Checkers Problems

Puzzle Jim Loy, Puzzle #2 loaded.
Color is set to red.
	   White  
	+--------+
	| - w - -|
	|w - - w |
	| w r - w|
	|w w - w |
	| w r - w|
	|r - r - |
	| - r - r|
	|- r r r |
	+--------+
	   Red    

milhouse: depth 15
search depth is now 15
milhouse: annotate
	1-6 : -359
	2-6 : -61
	3-8 : -425
	5-9 : -111
	7-11 : -180
	10-14 : -239
	15-18 : -119
	23-26 : 124
	23-27 : -40
milhouse: playout
     1. 23-26 [  124] 31x22 [ -119]
     2.   2-6 [  182] 16-11 [ -181]
     3.  7x23 [  188] 24-19 [ -187]
     4. 15x24 [  238] 28x19 [ -237]
     5. 10-14 [  239] 17x10 [ -239]
     6.  6x24 [  242] 22-18 [ -241]
     7. 24-27 [  244]  13-9 [ -235]
     8.  5x14 [  261]  18x9 [ -233]
     9. 27-31 [  237] 25-22 [ -202]
    10. 31-26 [  251] 22-18 [ -251]
	   White  
	+--------+
	| - - - -|
	|- - R - |
	| - r - w|
	|w - w - |
	| - - - -|
	|r - - w |
	| - - - -|
	|- r - r |
	+--------+
	   Red    

milhouse: playout
     1. 26-22 [  251] 18-15 [ -249]
     2. 23-27 [  271] 15-11 [ -312]
     3. 22-18 [  313] 21-17 [ -264]
     4. 18-15 [  267]  11-8 [ -267]
     5. 27-31 [  312]   8-4 [ -269]
     6. 15-11 [  335] 17-13 [ -341]
     7. 31-27 [  342]   9-6 [ -342]
     8.  1x10 [  346]  13-9 [ -346]
     9. 12-16 [  346]   9-6 [ -346]
    10. 27-24 [  352]   6-1 [ -352]
	   White  
	+--------+
	| - - - -|
	|- - - - |
	| R - - -|
	|w - - - |
	| r - - -|
	|- R r - |
	| - - - -|
	|W r - W |
	+--------+
	   Red    

milhouse: playout
     1. 16-19 [  352]   1-5 [ -354]
     2. 19-23 [  377]   5-1 [ -375]
     3. 23-26 [  393]   1-5 [ -400]
     4. 26-30 [  401]   5-9 [ -450]
     5. 24-19 [  401]   9-5 [ -451]
     6. 30-25 [  521]   5-1 [ -656]
     7. 10-14 [  656] 20-16 [-9986]
     8. 11x20 [ 9987]   4-8 [-9988]
     9.  3x12 [ 9989]   1-6 [-9990]
    10. 19-15 [ 9991]   6-9 [-9992]
	   White  
	+--------+
	| - - - -|
	|- - - R |
	| - - - -|
	|R - - - |
	| - R r -|
	|r - - W |
	| - - - -|
	|- - - - |
	+--------+
	   Red    

milhouse: playout
     1. 15-10 [ 9993]  9x18 [-9994]
     2. 20-24 [ 9995] 18-23 [-9996]
     3. 24-19 [ 9997] 23x16 [-9998]
     4. 12x19 [ 9999]	   White  
	+--------+
	| - - - -|
	|- - - R |
	| - - - -|
	|- r - - |
	| - - - -|
	|- - R - |
	| - - - -|
	|- - - - |
	+--------+
	   Red    

milhouse: 

Total number of checkers positions…

I might actually get to constructing an endgame database for checkers sometime soon. To start, I decided to try to write code to reproduce the table that appears in Schaeffer’s paper on solving checkers. It’s basically just five nested loops computing a big sum of some fairly obvious combinatorial terms. So, I coded it up in Python, and ran it. It produced the numbers I was expecting, but slowly. Really slowly. Unacceptably slowly. Like well over an hour before I killed it.

Part of the code computes the binomial coefficients. I put in some code to memoize those numbers (don’t recalculate, just look them up in a dictionary if you’ve computed them before). Bam! 2 seconds later, and I had the following table.

 N     possible positions    
------------------------------
 1                         120
 2                       6,972
 3                     261,224
 4                   7,092,774
 5                 148,688,232
 6               2,503,611,964
 7              34,779,531,480
 8             406,309,208,481
 9           4,048,627,642,976
10          34,778,882,769,216
11         259,669,578,902,016
12       1,695,618,078,654,976
13       9,726,900,031,328,256
14      49,134,911,067,979,776
15     218,511,510,918,189,056
16     852,888,183,557,922,816
17   2,905,162,728,973,680,640
18   8,568,043,414,939,516,928
19  21,661,954,506,100,113,408
20  46,352,957,062,510,379,008
21  82,459,728,874,435,248,128
22 118,435,747,136,817,856,512
23 129,406,908,049,181,900,800
24  90,072,726,844,888,186,880
------------------------------
   500,995,484,682,338,672,639
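The memoization itself is only a few lines. A sketch of the trick (just the cached binomial via Pascal's rule, not the full five-loop position counter):

```python
# Memoized binomial coefficients: each C(n, k) is computed once via
# Pascal's rule and then looked up in a dictionary. This is the trick
# that turned an hour-plus run into a couple of seconds.
_cache = {}

def binomial(n, k):
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    if (n, k) not in _cache:
        _cache[(n, k)] = binomial(n - 1, k - 1) + binomial(n - 1, k)
    return _cache[(n, k)]
```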