Category Archives: Checkers

Another checker problem…

I was waiting for sleep to come, and surfed over to the American Checker Federation website. As long-time readers of this blog might remember, I’ve been tinkering together a checkers program, which I’ve tentatively named “Milhouse”. This week’s problem challenge was a classic 2-on-3 battle where White is to move and force a draw from the down position:

board

It’s late, and it’s been a full day, so I can’t really say that I understand this position. But Milhouse doesn’t get tired: when I load the endgame database, it immediately proclaims that the position is a draw. What’s more, it identifies two different lines: their website suggests 23-26 as a drawing move, but 19-15 is also a draw.

Without the endgame database, White thinks it is down a man, and even with a 25 ply search can’t find the draw. In addition to the two moves listed above, it thinks that 23-18 is almost as viable. But 23-18 is a dead loss, presumably because it allows Red to king both of its checkers. Even with the database, though, Milhouse can’t seem to find the win for Red in a 31 ply deep search. This seems like a good test case for future improvements to playing with the database.

Milhouse takes Quiz #10 – Double Cross

Checker expert Jim Loy has a number of quizzes on his website, including the following one that I found as part of my earlier post on the Double Cross opening:

Quiz #10 – Double Cross.

Here are the moves that milhouse chose with a hard time limit of 30 seconds per move, along with the resulting points awarded…

5-9 (+5),16-20 (+5), 8-11 (+5), 11-16 (+5), 5-9 (+5), 5×14 (+3), 7-11 (+2), 6-9 (+5), 7-10 (+10), 1-6 (+10), 9-13 (+10), 3-7 (+15), 13-17 (+10), 6×13 (+5)

The moves marked in bold are moves which differed from Jim’s main line. Milhouse scores an impressive 95 out of 100. The only move keeping it from a perfect score is the position where Milhouse chose to play 7-11 instead of the preferred 4-8 line.

Let’s look at the position…


Red to move, Jim suggested that 4-8 is the right move, Milhouse played 7-11. Who is right?


I’ve just begun to toy around with analyzing this, but neither Milhouse nor Cake seems to like the response given in the line. Milhouse isn’t smart enough to see the line as won when it tried 7-11, but after 4-8 22-18, it sees itself as up a full man in only a 17 ply search.

I’ve sent an email to Jim Loy for more expert analysis. We’ll see what he has to say about the matter.

But still, 95/100 is pretty good. 🙂

Dusting off my checkers program Milhouse…

I noticed that Martin Fierz released a new version of his Checkerboard program, so I thought I’d set it sparring against my own program, milhouse. The Cake engine it ships with walks all over my program, but it managed this win against Simple Checkers.

[Event ""]
[Date ""]
[Black "Milhouse"]
[White "Simple Checkers"]
[Result "1-0"]
1. 9-14 23-18 2. 14x23 27x18 3. 5-9 32-27 4. 12-16 27-23 5. 16-20 23-19 6. 20x27 31x24 7.
10-14 26-23 8. 7-10 24-20 9. 8-12 28-24 10. 1-5 22-17 11. 9-13 18x9 12. 5x14 25-22 13. 6-9
23-18 14. 14x23 22-18 15. 13x22 21-17 16. 23-26 30x23 17. 3-8 17-14 18. 10x17 19-15 19.
2-7 23-19 20. 17-21 18-14 21. 9x18 15-10 22. 7x14 19-15 23. 22-25 29x22 24. 18x25 15-10
25. 25-30 10-6 26. 14-18 6-1 27. 18-23 1-6 28. 23-27 6-10 29. 27-32 10-14 30. 12-16 14-18
31. 32-28 18-14 32. 28x19 14-10 33. 30-26 10-7 34. 19-15 7-10 35. 15x6 *

Addendum: I did some quick analysis using both Cake and Milhouse this morning, trying to locate the bad moves that each engine made. I haven’t stared at it too deeply, but it appears that Simple Checkers walked into a poor opening, one which I suspect I could find in one of my checkers references if I looked hard. Cake finds the first 5 moves in its book, but then Simple Checkers plays 5. … 23-19, which is out of book. Still, in a 31 ply search, Cake scores the position at -126, which is over a full man down and would be tough to overcome.


White to move, but is already behind. Simple Checkers has walked into a poor opening. Cake and Milhouse both agree that this position is weak, and White is already almost a full man down.


Addendum: As I suspected, the opening is a known weak one for White, and the trouble begins right back at the second move of the game, although the opening is still part of the standard 3-move opening set, so some possibilities remain. The opening is called the Double Cross, and is considered weak for White (in particular, 1. … 23-18 is not considered a proper response). Jim Loy has a nifty page full of openings with some insightful commentary, and shows a Tinsley/Fortman game where this opening was featured, with an interesting cook played by Tinsley.

Addendum: Jim Loy has some interesting analysis of the Double Cross opening here.

Milhouse needs an infusion of actual checkers knowledge

This morning, I woke up and was trying to resolve a problem: why is Milhouse (my checkers program) so slow? The answer didn’t seem entirely obvious, so I pondered it some more. I refined it to “why is Milhouse so slow when it accesses the endgame database files?” I think I know why, but unfortunately, it will require some additional thought and work to remedy.

To explain, let me recap what happens. Put in simple terms, Milhouse does an alpha-beta search to some depth. As it is traversing in depth first order, it may realize that certain paths are futile (it can prove that no leaf in the subtree will be the right move regardless of what value it has, even without searching). Eventually, you’ll get down to a leaf node, and you’ll “evaluate” it (run a simple function which gives a figure of merit). These values get “backed up” through the tree and one of them ultimately wins as the value of the search.

In other words, every time you search, the score that you get is the evaluation of some node on the search frontier.
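
The pruning described above can be sketched in a few lines of C. This toy version (nothing like Milhouse’s actual code) searches a fixed two-ply, three-way tree whose leaf scores live in a flat array, backs up values through max and min levels, and counts how many leaves it actually has to evaluate:

```c
#include <assert.h>
#include <limits.h>

#define BRANCH 3
static const int leaves[9] = { 3, 12, 8,   2, 4, 6,   14, 5, 2 };
static int nodes_evaluated = 0;

/* Alpha-beta search over an implicit array-backed tree: node `index` at
 * `depth` plies from the leaves; `maximizing` alternates each ply. */
static int search(int index, int depth, int alpha, int beta, int maximizing)
{
    if (depth == 0) {               /* leaf node: "evaluate" it */
        nodes_evaluated++;
        return leaves[index];
    }
    for (int i = 0; i < BRANCH; i++) {
        int v = search(index * BRANCH + i, depth - 1, alpha, beta, !maximizing);
        if (maximizing && v > alpha) alpha = v;   /* backed-up value */
        if (!maximizing && v < beta) beta = v;
        if (alpha >= beta)
            break;   /* futile: no leaf in the rest of this subtree matters */
    }
    return maximizing ? alpha : beta;
}
```

With these leaf values, the root backs up a score of 3 while two of the nine leaves are pruned away without ever being evaluated.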

Let’s imagine for a moment that you have a simple evaluation function. Say you count men (unpromoted checkers) as 100 points and kings as 130 points, and the evaluation is the sum of your scores minus the sum of your opponent’s scores. That’s a reasonable but relatively un-knowledgeable evaluation function, and it would “work”. The problem is this: it doesn’t differentiate very finely. Many, many positions on the search horizon will have identical scores. This significantly reduces the efficiency of the search, because the “backed up values” can’t do their job efficiently: there are many subtrees with nodes on the search frontier that look just as good as the nodes you’ve already looked at. Hence, you end up searching and ultimately evaluating many more nodes than you would if you took more information into account.
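
A material-only evaluator of exactly this kind is tiny. Here’s a minimal sketch (the bitboard arguments are hypothetical, not Milhouse’s actual representation):

```c
#include <assert.h>

/* Count set bits: one bit per occupied square in a 32-square bitboard. */
static int popcount32(unsigned x)
{
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* Material-only evaluation: men are worth 100, kings 130, scored from
 * the side-to-move's point of view. Note how coarse it is: any two
 * positions with the same material get the identical score. */
static int evaluate(unsigned my_men, unsigned my_kings,
                    unsigned opp_men, unsigned opp_kings)
{
    return 100 * popcount32(my_men)  + 130 * popcount32(my_kings)
         - 100 * popcount32(opp_men) - 130 * popcount32(opp_kings);
}
```

Every position where I have three men and a king against two men scores exactly 230, no matter how the pieces stand, which is precisely the coarseness problem.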

Ultimately then, “knowledge is power”. Improving the evaluation function to include more information has an effect on overall search efficiency. The more knowledge it has in being able to differentiate between the merits of two positions, the fewer overall nodes it will have to evaluate in search (the fewer subtrees it will have to resolve all the way to the leaves).

My evaluation function doesn’t understand anything about runaway checkers or trapped kings, and therefore it has to use deeper searches to resolve situations that include these features. When I’ve played against Cake, I’ve noticed that not only does Cake search deeper faster, but Cake seems to recognize that it’s got me in trouble in searches of much shallower depth (sometimes as many as five moves earlier).

So, I’ve got to help Milhouse learn more about checkers. I can do this two ways: one, by learning more about checkers myself, and putting some of that knowledge into Milhouse. The second, by learning more about machine learning, and then allowing Milhouse to learn more itself, either through self-play or training.

I suspect that machine learning might be more productive, but I’m not entirely convinced. I’ll have to give it a bit more thought.

Addendum: Okay, that was highly abstract, let’s work through an example. I’ve been trying to absorb some knowledge from Checkers legend Derek Oldbury, whose Move Over is available online. Here’s Diagram 17, a fairly simple position:

From Derek Oldbury, Diagram 17. White to move, and is lost...

If it were Black’s turn to move, it would be a draw (both men promote, and a boring 2×2 king draw ensues), but if White is to move, clearly he can’t move his king (backing away pins him against the wall, advancing is suicide), so he must move his man. Putting it against the wall is hopeless: the black man takes the opposing position and waits to jump it. Therefore, the line goes 24-19 8-11 19-15 18-22 15×8 22×13. There is an exchange of material, and while White will king his man, he’s still doomed, as he can’t escape to the safety of the double corner. Here’s how the search looks using Milhouse without its endgame database:

milhouse: restore diagram.17
board restore from diagram.17
	   White  
	+--------+
	| - - - -|
	|- - - - |
	| w - - -|
	|- - R W |
	| - - - -|
	|- - - - |
	| r - - -|
	|- - - - |
	+--------+
	   Red    
... White to move
... FEN: W:WK17,24:B8,K18

milhouse: id
      +1 : [-11] 0.00s
         : 24-19
      +3 : [-15] 0.00s
         : 24-19  8-12 17-13
      +5 : [-12] 0.00s
         : 24-19  8-11 19-15 11-16 15-10
      +7 : [-19] 0.00s
         : 24-19 18-23 19-16 23-18 16-12  8-11 12-8 
      +9 : [-16] 0.01s
         : 24-19  8-12 17-21 18-22 19-15 12-16 15-10 16-19 10-6 
     +11 : [-12] 0.02s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-8 
         : 9-14  8-11
     +13 : [-14] 0.03s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-8 
         : 9-14  8-11 14-18 11-16
     +15 : [-17] 0.10s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-8 
         : 9-14  8-11 14-18 11-16 18-15 16-12
... researching after failing low (-42)
     +17 : [-9983] 0.30s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-8 
         : 9-6   8-11  6-10 11-16 10-15 16-12 15-11 12-16
final score = -9983
0.327 seconds elapsed.
... 231735 nodes searched, 65777 nodes evaluated
... 30493 alpha nodes, 5483 pv nodes, 65965 beta cutoffs
... avg branching factor 1.12
... 11096 nodes ordered
... 19060 pv hits, 16916 pv misses (52.98%)
transposition tables
... 166190 gets (85501 tier0 / 1072 tier1 / 79617 failed)
... 101941 puts (85701 tier0 / 16240 tier1) (30493 alpha, 5483, 65965 beta)
... 0 chinook database accesses.

The output bears some explanation. Milhouse uses iterative deepening to search, adding two ply to each search as it deepens. So, it starts out with one ply of depth, then three, then five, and so on, up until it discovers a winning line with a 17 ply search (it’s actually a 19 ply winning line; quiescence search makes it resolve the final jump that wins the game). You’ll notice that at each search depth it believes 24-19 to be the best move (it is), but that right up until the realization that the game is lost (i.e., through the 15 ply search), it believed the game to be essentially even. Had I encountered the starting position at the leaf node of a search, it might have believed it could salvage a draw from this position, and selected it over a position which actually provided a draw.
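
The deepening loop itself is simple. Here’s a minimal sketch, with a stub standing in for the real fixed-depth search (the stub just mirrors the trace above: roughly even until 17 ply proves the loss):

```c
#include <assert.h>

#define WIN_THRESHOLD 9000   /* scores beyond this are proven wins/losses */

static int reached_depth;

/* Stand-in for a real fixed-depth alpha-beta search: pretend the
 * position looks roughly even until a 17 ply search proves the loss. */
static int search_to_depth(int depth)
{
    return depth < 17 ? -12 : -9983;
}

/* Iterative deepening: search at 1, 3, 5, ... plies, keeping the last
 * completed result (in a real engine, each pass also seeds the move
 * ordering for the next). Stop early once the outcome is proven. */
static int iterative_deepening(int max_depth)
{
    int score = 0;
    for (int depth = 1; depth <= max_depth; depth += 2) {
        score = search_to_depth(depth);
        reached_depth = depth;
        if (score <= -WIN_THRESHOLD || score >= WIN_THRESHOLD)
            break;   /* proven win or loss: no point searching deeper */
    }
    return score;
}
```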

Now, let’s look at the same search, but using the endgame database.

board restore from diagram.17
	   White  
	+--------+
	| - - - -|
	|- - - - |
	| w - - -|
	|- - R W |
	| - - - -|
	|- - - - |
	| r - - -|
	|- - - - |
	+--------+
	   Red    
... White to move
... FEN: W:WK17,24:B8,K18

(DB W LOSES) milhouse: depth 99
search depth is 99
(DB W LOSES) milhouse: id
      +1 : [-7614] 0.00s
         : 17-13
      +3 : [-7614] 0.00s
         : 17-13 18-14 24-19
... researching after failing low (-7712)
      +5 : [-7712] 0.01s
         : 17-13 18-14 13-9  14x5  24-19
... researching after failing low (-7808)
      +7 : [-7808] 0.01s
         : 24-19  8-11 19-15 18-22 17x26 11x18 26-30
      +9 : [-7810] 0.01s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-7 
     +11 : [-7812] 0.02s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-7 
         : 9-14  7-2 
     +13 : [-7812] 0.02s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-7 
         : 9-14  7-3  14-10  3-8 
     +15 : [-7812] 0.03s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-7 
         : 9-14  7-3  14-17  3-7  17-14  7-3 
... researching after failing low (-9983)


     +17 : [-9983] 0.04s
         : 24-19  8-11 19-15 18-22 15x8  22x13  8-3  13-9   3-7 
         : 9-14  7-3  14-10  3-8  10-15  8-12 15-11 12-16
final score = -9983
0.075 seconds elapsed.
... 27737 nodes searched, 4147 nodes evaluated
... 4573 alpha nodes, 124 pv nodes, 9520 beta cutoffs
... avg branching factor 1.09
... 55 nodes ordered
... 3388 pv hits, 1309 pv misses (72.13%)
transposition tables
... 23599 gets (9081 tier0 / 15 tier1 / 14503 failed)
... 14217 puts (14125 tier0 / 92 tier1) (4573 alpha, 124, 9520 beta)
... 3852 chinook database accesses.
(DB W LOSES) milhouse: 

In spite of the fact that it performed 3852 accesses to the database cache, it actually ran 7× faster. You can see that it searched and evaluated many fewer nodes. The highly negative score even at the initial search depth of one means that it knows immediately that this position is lost, effectively giving it the benefit of an additional 17 ply of search should this position show up as a leaf in the search tree. What is needed is an evaluation function which more closely predicts the value of deeper searches, without the expense.

Addendum2:

Here’s an example position that causes difficulty without the endgame database:

What should the score be, with red to move?

Without the endgame database, Milhouse scores this position as +6 for Red, but of course it’s actually won (it takes about 7 more ply of search depth to discover). If you look at how it scores the possible moves, you get:

analyzing board position...
	    -6 :  6-1  11-15 ...
	    -4 :  6-2  11-15 ...
	    +5 :  6-9  11-15 ...
	   +12 :  6-10 11-8  ...

You’ll see it selects the right move, but the reality is that the first three moves are all draws, while only the last move leads to the win.

Flaws in Milhouse’s Play

While debugging some more challenging positions from The Checker Maven, I encountered the following position which seemed to be giving Milhouse fits.

White to move, and should be doomed, but isn't against Milhouse.

In debugging the preferred line, it seems to be stuck in a short repeating cycle which it views as best. In reality, the repeated positions should be scored as draws, which should be enough to reject the line, since the database knows that the position is a win. But my program does nothing to detect repeated positions (I briefly flirted with it, but didn’t like the implementation, so I discarded it). I’ll have to figure this out to solve positions like this.
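
For what it’s worth, the usual fix is cheap: keep a stack of position hashes along the current search line, and score any position already on it as a draw. A minimal sketch (the hash values below are placeholders, not a real checkers position hash):

```c
#include <assert.h>

#define MAX_PLY 128
static unsigned long long line[MAX_PLY];  /* hashes along the current line */
static int ply = 0;

/* A position that repeats along the line being searched is a draw by
 * repetition, so the search can score it as such and reject the cycle. */
static int is_repetition(unsigned long long hash)
{
    for (int i = 0; i < ply; i++)
        if (line[i] == hash)
            return 1;
    return 0;
}

/* Call push on make_move, pop on unmake_move. */
static void push_position(unsigned long long hash) { line[ply++] = hash; }
static void pop_position(void)                     { ply--; }
```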

Gould Puzzle #516

I started working through my analysis of Gould Puzzle #516, which my previous scan using the Cake 8-piece database revealed to be a loss for White, instead of the draw that Gould listed. Here’s the position:

Gould Position #516

The database reports this position as lost, but Gould thinks it is a draw. Let’s see where the published play goes wrong. Gould thinks that the play should proceed 6-2, 26-23, 19-16, 12-19, 2-7, 11-15, and 1-6, leading to the following:

Gould suggests that Red should play 23-27, but that move tosses away the win. 14-18 retains the extra man.

Gould listed play as proceeding with 23-27 (which is a drawing move), 6-10, 14-18, 20-16, 18-22, and 10-14, leading to a position where each side has three kings. But Red can maintain his one piece lead by playing 14-18 instead. Red can then walk a narrow line, a pretty nice little ballet that advances all the red men and leads to a 4-on-3 endgame: tricky and lengthy, but very winnable.

Addendum: There is a similar flaw in the other variation presented.

Gould’s The Game of Draughts is on Google Books, and archive.org

I was goofing around with my program, and had loaded a problem that was supposedly a win for Red, and set it searching. After searching to thirty or so ply, Milhouse was still convinced the position was drawn. I then tried examining it in Cake to 10 ply deeper, and it still looked like a draw. I wondered what the analysis of the position was, so I tried to see if I could find a copy of Gould’s book online. I found it first on archive.org, but the quality of that scan is pretty uneven: some of the pages are misaligned and cut off. But luckily, I found that Google Books had digitized it. For some odd reason, downloading the PDF results in a file with all the diagrams stripped out, but if you page through it using their online viewer, it works just fine. The PDF might include some elements which require real Adobe Acrobat to render properly, but the overall quality is quite good. Here’s the particular puzzle I was looking at:

The game of draughts. Problems, critical positions, and games By Joseph Gould

Here’s a link to the page that contains the analysis. I’ll try to present my “refutation” after I work through it a bit more.

Using Cake’s Endgame Database…

I’ve been having a bit of difficulty with the Chinook endgame database, and since Martin Fierz was kind enough to release his endgame database as well as the code for accessing it, I thought I’d give it a try by writing the necessary adapters to make it work with Milhouse. Martin’s code is pretty good, but included a few extra Windows dependencies that I didn’t like, so it took me about an hour to get it to the point where it compiles, links, and initializes the database. We’ll see how it works. Eventually, I’ll replace it with my own implementation, probably including a DTC database when I do so, but it’s obvious to me that I still have some problems with the core program, so this will serve as an interim measure. The bonus is that I also end up with 8-piece databases, instead of just 6-piece ones.

Using the Chinook WLD database…

So, for the past couple of days, I’ve been working on fixing the WLD database lookup code in Milhouse, in an attempt to get it to handle endgame positions properly. There is a tiny bit of subtlety which I haven’t seen explained anywhere else, so I think I’ll take a moment here to explain it.

The Chinook database that I am using nominally contains WLD information for all positions with six pieces or fewer. But that’s not quite true: it doesn’t include any position in which the side to move has a capture. Why? To shrink the size of the database. Roughly half of the positions (if I recall) are capture positions, and Chinook works better if we look those up in the smaller database that results after the jump, rather than here. That’s all very well and good. We could just do a quiescence sort of search: play out our own captures, and look up the resulting positions.

But there is a further wrinkle: Chinook also deletes positions where the moving side has no captures but the other side does. Again, this deletes about half the remaining positions from the database, which is good, but now you really need to be somewhat clever.

Basically, you end up writing a little minimax search engine, but instead of running the evaluation function at the leaves, you do database lookups. You have to keep recursing down until neither the moving side nor the non-moving side has a capture; then you can do the lookup. If any of your children is a loss for the other side, then you return a win. If all of your children are wins for the other side, then you’ve lost. Otherwise, you are drawn. (Implement that logic carefully; I botched it once, which led to an evening of debugging.) The algorithm always terminates because one or both sides removes at least one checker for each pair of plies.
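
That resolution logic can be sketched over a toy position graph. Here everything about the real machinery (move generation, the Chinook lookup API) is replaced by a hand-built table: nodes marked quiet have a database value directly, and the rest must be resolved through their successors, exactly per the win/loss/draw rules above:

```c
#include <assert.h>

enum { LOSS, DRAW, WIN };

#define MAXC 3
struct node {
    int quiet;        /* 1: neither side has a capture; the DB stores it */
    int db_value;     /* WLD from the side-to-move's point of view */
    int nchild;
    int child[MAXC];  /* successor positions to recurse through otherwise */
};

/* Hand-built toy graph standing in for real positions. */
static const struct node tree[] = {
    /* 0 */ { 0, 0,    2, { 1, 2 } },
    /* 1 */ { 1, DRAW, 0, { 0 } },
    /* 2 */ { 0, 0,    2, { 3, 4 } },
    /* 3 */ { 1, WIN,  0, { 0 } },
    /* 4 */ { 1, LOSS, 0, { 0 } },
};

/* Child values are from the opponent's point of view: any child that is
 * a LOSS for them is a WIN for us; all children WIN for them means we
 * have lost; anything else is a draw. */
static int wld_resolve(int n)
{
    if (tree[n].quiet)
        return tree[n].db_value;
    int best = LOSS;
    for (int i = 0; i < tree[n].nchild; i++) {
        int v = wld_resolve(tree[n].child[i]);
        if (v == LOSS) return WIN;
        if (v == DRAW) best = DRAW;
    }
    return best;
}
```

Node 2 resolves to a win (one successor is a loss for the opponent), while node 0 can do no better than the draw at node 1.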

And, it appears to be working!

Frankly, using a WLD database for only six pieces seems kind of odd. If we stored a “distance-to-conversion” or perfect play database, we’d take up more space, but it would be trivial to get the program to play out difficult endgames, effectively adding dozens of ply to the search engine with relatively little complication. I think that’s next on my list. I’ve begun trying to decipher Appendix B in Schaeffer’s “Solving Large Retrograde Problems” paper, but the calculation of MaxWP is confusing, to say the least. I’ll have to work through it myself.

Addendum: Actually, it’s interesting to note that three quarters of the entire database consists of positions which are captures. It seems like that can’t possibly reflect actual play (actual captures are quite rare in endgames). Is this an exploitable feature? Martin Fierz hints that it might be in his “Stepping Stone” project suggestion. I’ll have to think about it some more.

Addendum2: Sigh. There is an infinite loop condition which appears not to be caught. Not sure what the deal is with that yet; I’ll have to think about it more. I think it arises because you don’t make any headway when the opposing side has a capture: you play a move which removes the capture, then he plays a move which restores the capture, and so on… Not sure how to catch that. If we play two non-capture moves in a row, does that mean that we are drawn? Perchance.

Milhouse misses a win…

White to move and win...

I was playing a sparring match between Milhouse and Cake, and Milhouse arrived at the position on the right as White, with White to move. Inexplicably, it played 27-23, squandering the win by avoiding the obvious two-for-one shot 19-16, which forces the winning exchange.

Oddly, when I extracted this position from the log and replayed it, I got the right move:

milhouse: id
         1:  [-3]
... researching after failing high.
         3: 19-16 12x19 [137]
         5: 19-16 12x19 23x7  26-31 [134]
         7: 19-16 12x19 23x7   6-10  7-2  26-31 [137]
         9: 19-16 12x19 23x7   6-9   7-2  26-31  2-6  31-27 [137]
        11: 19-16 12x19 23x7   6-9   7-2  26-31  2-6  31-27 20-16 27-32 [132]
        13: 19-16 12x19 23x7   6-10  7-2  26-31  2-6  10-15  6-10 15-19 10-15 19-23 ... [144]
        15: 19-16 12x19 23x7   6-10  7-2  26-31 20-16 31-27  2-6  10-15 16-11 27-32 11-8  [145]
        17: 19-16 12x19 23x7   6-10 20-16 26-31  7-2  10-15 16-11 15-19  2-6  19-23 11-7  [129]
        19: 19-16 12x19 23x7   6-10 20-16 26-31 25-22 10-15  7-2  31-27 16-11 15-19  2-6  [128]
        21: 19-16 12x19 23x7   6-10 20-16 10-15  7-2  15-19 16-11 26-31 25-22 31-27 21-17 [128]
... researching after failing high.
        23: 19-16 12x19 23x7   6-10 25-22 10-15  7-2  26-30  2-7  15-19  7-11 19-23 11-15 [7420]
        25: 19-16 12x19 23x7   6-10 25-22 10-15  7-2  15-19  2-7  26-30  7-11 19-23 11-15 [7428]
... researching after failing high.
        27: 19-16 12x19 23x7   6-10 25-22 10-15  7-2  15-19  2-7  26-30  7-11 19-24 11-15 [7512]
        29: 19-16 12x19 23x7   6-9  25-22  9-14 20-16 26-31  7-2  31-27 16-11 27-32  2-6  [7516]
final score = 7516
175.253 seconds elapsed.
... 47783648 nodes searched, 24123999 nodes evaluated
... avg branching factor 1.80
... 388604 nodes ordered
... 3684106 pv hits, 2544492 pv misses (59.15%)
transposition tables
... 23596225 gets (2540472 tier0 / 6252127 tier1 / 14803626 failed)
... 9901700 puts (1812362 tier0 / 8089338 tier1) (2856247 alpha, 139002, 6906451 beta)
... 20062999 chinook database accesses.

I don’t know what went wrong during the game. Very odd.

Addendum: Okay, I’m confused. Perhaps I’m reading the analysis Cake did of the game incorrectly, because 27-23 wasn’t a valid choice here. I’ll try to decipher what’s up later.

Additional progress has been made on Milhouse, though: thanks to some thoughtful hints from Ed Trice (thanks, Ed), I think I finally resolved the problem with Chinook DB access once we hit the endgame. I have added a quiescence search to resolve capture positions properly, and it seems to pass nearly all of the tests it used to fail. I say nearly all, because I have discovered a single test which still fails, shifting back from a loss to a draw, and back to a loss, as iterative deepening proceeds. Not sure what that’s about yet.

Some new, and relatively challenging checker puzzles

I was surfing over at Bob Newell’s “Checker Maven” website, and found an awesome collection of PDN files for download. I have a table of challenging positions compiled into Milhouse, which I use for debugging and testing, so I decided to hack together a simple parser to input some of these positions. The “Gem Problems” file contains 162 positions, some of which are quite challenging. Consider this one:

Red to move and win.

It takes around a 55 ply search to find the winning line, yet a reasonably skilled checkers player would be expected to play this line out without mistakes. Milhouse still finds positions like this challenging, since it typically maxes out around 40 ply or so before grinding to a halt. I suspect this reveals a deep flaw in my move ordering, and it might be difficult to fix. But I’ll think about it some more.

Milhouse wins against Cake!

Okay, before I get too excited, I’ll disclose that Cake was set to a time limit of around 1 second, which limited it to just a few ply (maybe 9, typically), while I let Milhouse think a little harder (still less than 10 seconds per move, typically). Still, it’s good: it means that sparring matches between Milhouse and Cake can be tuned to make them challenging. I need to figure out how to build Milhouse as an engine for CheckerBoard so this can be automated.

Perhaps I could use mingw to build an appropriate dll.

[Event "Morning Match"]
[Date "2009-05-04"]
[Black "Cake, 1sec per move"]
[White "Milhouse, 19 ply search"]
[Result "0-1"]
1. 9-13 23-19 2. 6-9 27-23 3. 11-15 23-18 4. 8-11 26-23 5. 4-8 30-26 6. 9-14 18x9 7. 5x14 
22-18 8. 15x22 25x9 9. 11-16 9-6 10. 2x9 24-20 11. 8-11 26-22 12. 10-15 19x10 13. 7x14 
29-25 14. 16-19 23x7 15. 3x10 28-24 16. 10-15 32-28 17. 14-18 24-19 18. 15x24 22x15 19. 
9-14 28x19 20. 14-18 15-10 21. 12-16 19x12 22. 13-17 21x14 23. 18-22 25x18 24. 1-5 31-27 
25. 5-9 14x5 0-1

KCheckers isn’t very good at all, but Cake is no cake walk….

Milhouse utterly destroys kcheckers. This game is typical of the five I just played against it.

[Event "KCheckers Game"]
[Date "2009.04.29"]
[Round "1"]
[Black "*Master*"]
[White "Milhouse"]
[Result "0-1"]
[GameType "21"]
1. 12-16 22-18 2. 8-12 24-19 3. 10-15 19x10 4. 7x14 25-22 
5. 9-13 18x9 6. 5x14 23-19 7. 16x23 27x9 8. 12-16 26-23 
9. 6-10 22-18 10. 11-15 18x11 11. 16-20 9-5 12. 1-6 5-1 
13. 3-7 23-18 14. 7x16 18-14 15. 10x17 1x10 16. 17-22 31-27 
17. 16-19 27-24 18. 20x27 32x16 19. 22-26 30x23 
20. 4-8 28-24 21. 13-17 21x14 22. 8-11 16x7 23. 2x11 10-7 
24. 11-16 24-20 25. 16-19 23x16 0-1

On the other hand, I got Martin Fierz’s Checkerboard program running under Wine on my Macbook, and its “Cake” engine made pretty short work of my program.

[Event "Sparring"]
[Date "2009-04-29"]
[Black "Milhouse"]
[White "Cake"]
[Result "0-1"]
1. 11-15 24-19 2. 15x24 28x19 3. 8-11 22-18 4. 10-14 27-24 5. 7-10 24-20 6. 3-8 19-16 7. 
12x19 23x7 8. 2x11 26-23 9. 10-15 25-22 10. 8-12 31-27 11. 6-10 30-25 12. 1-6 22-17 13. 
15x22 25x18 14. 11-15 18x11 15. 9-13 11-7 16. 13x22 7-2 17. 5-9 2-7 18. 4-8 21-17 19. 14x21 
7x5 20. 21-25 5-1 21. 6-9 1-6 22. 9-13 6-10 23. 25-30 23-19 24. 8-11 27-24 25. 22-26 32-28 
26. 11-15 10-14 27. 13-17 19x10 28. 17-22 20-16 29. 12x19 24x15 30. 26-31 14-18 31. 22-26 
18-23 32. 30-25 23x21 0-1

Milhouse needs an opening book…

I decided to play a game against Chinook set on its intermediate level. The game was lost before it even began. Fortman’s Basic Checkers lists 3. 16-19 … as the only viable move for Black, and the convincing way that Chinook took me apart is a pretty good illustration that even with an 18 ply search, Milhouse was unable to see the danger and avoid the losing move. I think I need an opening book.

[Event "Sparring Match"]
[Date "2009-04-27"]
[Black "Milhouse"]
[White "Chinook (Intermediate)"]
[Site "Internet"]
[Result "0-1"]
1. 10-14 22-18 2. 12-16 24-20 3. 7-10 28-24
4. 8-12 24-19 5. 3-7 25-22 6. 9-13 18x9
7. 5x14 22-17 8. 13x22 26x17 9. 1-5 29-25
10. 14-18 23x14 11. 16x23 27x18 12. 10-15 25-22
13. 15-19 17-13 14. 11-15 18x11 15. 7x16 20x11
16. 6-9 13x6 17. 2x25 21-17 18. 25-29 17-14
19. 12-16 11-7 20. 16-20 7-3 21. 19-23 3-7
22. 20-24 7-10 23. 5-9 14x5 24. 4-8 10-15
25. 23-27 32x23 26. 24-28 15-11 0-1
After 3. 7-10... White to move and win...

Addendum: With a 20 ply search, you can see that Milhouse thinks that 6-10, 7-10, and 8-12 are all more viable. Broken.

            -9 :  6-10 25-22  8-12 28-24 10-15 22-17 15x22 17x10  7x14 26x10  2-7  10-6   1x10 23-18
            -9 :  7-10 27-24  8-12 24-19 10-15 19x10  6x22 25x18  3-7  28-24  7-10 30-25  4-8  24-19
            -9 :  8-12 28-24  7-10 24-19 10-15 19x10  6x22 25x18  3-7  27-24  7-10 30-25  4-8  24-19
          -109 :  9-13 18x9   5x14 23-18 14x23 26x12  7-10 27-23  6-9  25-22  2-6  30-25 10-15 21-17
          -111 : 11-15 18x11  8x15 20x11  7x16 23-18 15x22 25x18 14x23 26x12  2-7  27-23  6-10 23-18
          -116 : 14-17 21x14 11-15 18x11  8x15 20x11  7x16 23-19  9x18 19x10  6x15 26-23  2-6  23x14
           -97 : 16-19 23x16 14x23 26x19  6-10 25-22 11-15

Well, not really broken. Playing 3. 16-19 requires Black to play a man down for the duration of any reasonable search horizon. Milhouse will have difficulty playing into man-down openings because of its relatively weak evaluation function. It will absolutely require an opening book to survive openings like this one.

Addendum: Things to do: see if Milhouse does better with the roles reversed in this position. I have also been pondering coding up some form of opening book generator based upon Lincke’s “drop-out expansion” idea. It’s actually pretty straightforward, and I could leverage a couple of multicore desktop machines to create a basic opening book in a month or two of runtime. Or, I could just try to make use of Martin Fierz’s book from Cake. But that would be cheating. Sort of. 🙂