Monthly Archives: March 2010

A small experiment with a C compiler

I’m still somewhat baffled by the performance of my checkers program. I keep pondering whether it would be a good idea to take all the lessons I’ve learned, burn the current version down, and try again with Milhouse2. After all, in the words of Tom Duff (who may not have said this, but should have), “any program worth writing is worth rewriting”. Toward that end, I was reconsidering the data representation that I used, and revisited this nifty page on bitboards:

Checker Bitboard Tutorial.

I basically use bitboards already, but at the very end of the page you’ll see mention of a novel, if somewhat unobvious, representation credited to Thotsaphon Thanatipanonda, which simplifies move and jump generation. My own code for this was laboriously constructed using a Python program, so it would be nice if we could shrink that code a bit and make it more straightforward. One point that Ed mentions is that this representation really wants machine-language rotate instructions, and C has no rotate operator. I thought I’d see what gcc does with the obvious shift-and-or implementation, so I quickly wrote the following function:

inline uint32_t
rol7(uint32_t val)
{
    return (val << 7) | (val >> 25) ;
}

(This function uses uint32_t to make sure we’re working with 32-bit values; the typedefs are in stdint.h.) As we might hope, gcc is smart enough to turn this into a single machine-language rotate instruction (rorl). This means that we can write a move generator with no special handling for even versus odd rows, a distinction which currently at least doubles the size of my own move generator. Pretty nifty.
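
Incidentally, the same shift-and-or idiom is easy to prototype in Python (this little cross-check is my own aside, not part of the original experiment); the only wrinkle is that Python integers are unbounded, so you have to mask the result back down to 32 bits:

def rol(val, n):
    # rotate a 32-bit value left by n bits; the mask keeps Python's
    # arbitrary-precision integers from growing past 32 bits
    return ((val << n) | (val >> (32 - n))) & 0xFFFFFFFF

assert rol(0x80000001, 7) == 0x000000C0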

Two questions regarding the performance of Milhouse…

Just a couple of quick notes for future investigation:

  1. Milhouse uses a windowed search with a “soft-fail” alpha-beta search routine. If the value returned falls outside a fairly narrow window, Milhouse re-searches the high region or the low region. These re-searches turn out to be incredibly slow, and I’m not sure I understand why that should be. (A rough sketch of the pattern appears below.)
  2. There are still some anomalies in detecting repetitions.

Addendum: Perhaps the first is a failure in move ordering?
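
For reference, here’s a minimal sketch in Python of the aspiration-window pattern from the first note, run over a toy game tree (my own illustration, not Milhouse’s actual code); the re-search kicks in whenever the fail-soft value lands outside the window:

INFINITY = float('inf')

def alphabeta(node, alpha, beta):
    # fail-soft negamax over a toy tree: a node is either a number (a leaf
    # score from the point of view of the side to move) or a list of children
    if not isinstance(node, list):
        return node
    best = -INFINITY
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:
            break          # cutoff; best may lie outside (alpha, beta), hence "fail-soft"
    return best

def aspiration_search(root, guess, window=50):
    # search with a narrow window centered on the previous iteration's score
    alpha, beta = guess - window, guess + window
    score = alphabeta(root, alpha, beta)
    if score <= alpha:     # fail low: widen the bottom of the window and re-search
        score = alphabeta(root, -INFINITY, beta)
    elif score >= beta:    # fail high: widen the top of the window and re-search
        score = alphabeta(root, alpha, INFINITY)
    return score

tree = [[3, 17], [2, 12], [15, [25, 0]]]
print aspiration_search(tree, guess=10, window=4)   # prints 3, after a fail-low re-search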

How not to write a Sherlock Holmes story…

The late Jeremy Brett, possibly in the truest portrayal of Sherlock Holmes

I interrupt your regularly scheduled ham radio and computer checkers postings to, frankly, just rant about something: I was listening to XM Radio’s old-time radio channel, and there was an entirely forgettable Sherlock Holmes story. I’ve previously mentioned that much of the Sherlock Holmes fiction not done by Conan Doyle is crap, but I thought I’d enumerate the things that I think are absolutely fatal to a Sherlock Holmes story.

  1. Romantic involvement for Holmes. Wrong, wrong, wrong, wrong, wrong! Did anyone not read A Scandal in Bohemia? It begins:

    To Sherlock Holmes she is always the woman. I have seldom heard him mention her under any other name. In his eyes she eclipses and predominates the whole of her sex. It was not that he felt any emotion akin to love for Irene Adler. All emotions, and that one particularly, were abhorrent to his cold, precise, but admirably balanced mind.

    Attempts to give Sherlock Holmes romantic interest just make an extraordinary character ordinary.

  2. Misogyny. While Holmes has no real romantic interest in women, he behaves in a largely chivalrous manner. His lack of romantic interest is not caused by bitterness; it’s more a complete lack of emotional understanding. While he may find the motives of women “inscrutable”, he is for the most part rather deferential and charming, at least until the case is solved.
  3. Watson as a bumbler. Watson is a doctor, a writer, and a former soldier. I blame Nigel Bruce for this far-too-common portrayal.
  4. Ghosts. While certain Sherlock Holmes stories begin with mysteries that appear supernatural (The Hound of the Baskervilles perhaps being the most obvious example), in the end the explanations are always rational. The Hound wasn’t Cerberus, it was a mastiff covered in phosphorus. Magic and the supernatural have no place in the rationalism of Holmes’ universe. (Conan Doyle’s personal excursions into spiritualism seem not to have crept into the persona of Sherlock Holmes.)
  5. Interactions with either fictional or actual characters from history. Stories which rely on interactions with Mark Twain, H.G. Wells, or Albert Einstein are just contrivances that try to suck you in by dropping names rather than telling stories.
  6. That damned deerstalker hat and pipe.
  7. Saying “Elementary, my dear Watson”. You won’t find the phrase in Conan Doyle’s works, but it has become a cliche in poor imitators.
  8. Stories which overplay the extremes of Holmes’ character, either his vices (his drug use, his fits of boredom in which he peppered the walls of 221B with his revolver) or his virtues (while mostly chivalrous, he wasn’t above showing impatience and petulance with those he felt unworthy). The beauty of Conan Doyle’s portrayal is at once the extraordinary nature of Holmes’ character and the reality of it.

Why did I bother to write this stuff down? No real reason, other than that I really enjoy the canon of fifty-six short stories and four novels, and have read them all multiple times. The least of these are better (in my opinion) than 99% of all the subsequent works using these characters. As much as I’d like to complain about how much bad fiction there is out there, there is something truly great: the characters themselves have passed into the public domain, and are free for all of us who have enjoyed the originals to adapt with new ideas and new adventures.

That’s pretty cool too.

jCheckers Beats Milhouse

After watching a couple of games where Milhouse appeared to get behind but then pulled out a draw, here’s one where Milhouse got behind, stumbled, and lost. I haven’t had the opportunity to study it in any detail yet, but I’ll archive the game here for future analysis:

[Black "jcheckers"]
[White " milhouse"]
[Event "sparring 15 seconds"]
[Result "1-0"]
[Date "unknown"]
1. 11-15 24-19 2. 15x24 28x19 3. 9-14 27-24 4. 8-11 24-20 5. 11-15 32-28 6.
15x24 28x19 7. 4-8 22-18 8. 8-11 18x9 9. 5x14 19-16 10. 12x19 23x16 11. 10-15
25-22 12. 6-10 26-23 13. 14-18 23x14 14. 10x26 30x23 15. 15-19 21-17 16. 19x26
31x22 17. 11-15 16-12 18. 7-11 17-14 19. 15-19 22-17 20. 19-23 14-10 21. 23-26
10-7 22. 3x10 12-8 23. 26-30 8-3 24. 2-7 17-14 25. 10x17 3x10 26. 17-22 10-7
27. 11-15 7-10 28. 15-18 10-14 29. 18-23 14-10 30. 23-27 10-15 31. 27-32
1-0

Addendum: Hmm. This is another game where Milhouse played white and stumbled into the Double Corner opening. Milhouse could probably benefit from an opening book, or at least I should figure out why it prefers this move in this case, and maybe alter it to not play into this opening.

Addendum 2: By the time white plays 7. … 22-18, he’s probably already behind, but 22-18 doesn’t lead to anything good.

jCheckers released

Martin Fierz, author of the truly excellent checkers program Cake, has released a checkers program in Java. I run Cake on my PC, and also at times under Wine on Linux, but it is nice to have a version which can run on Mac/Linux without any hassle.

jCheckers.

For fun, I fired up Milhouse and played a game against jCheckers. I gave each engine five seconds to think (which is pretty low, but it makes the game go fast). Milhouse managed to eke out a draw, but it appears to me that it was lucky: Milhouse was in a pretty cramped position for much of the game. It seems to me that Milhouse narrowly broke through the logjam at the end, and each side ends up with four kings, which is a drawn position.

[Black "Milhouse"]
[White " jCheckers"]
[Event "First Game Against jCheckers"]
[Result "1/2-1/2"]
[Date "2010-03-28"]
1. 9-14 22-18 2. 10-15 18x9 3. 5x14 25-22 4. 15-19 24x15 5. 11x25 29x22 6. 8-11
28-24 7. 6-10 22-18 8. 1-5 18x9 9. 5x14 26-22 10. 11-15 31-26 11. 3-8 23-18
12. 14x23 26x19 13. 7-11 21-17 14. 11-16 27-23 15. 8-11 17-13 16. 16-20 32-27
17. 11-16 13-9 18. 4-8 30-25 19. 8-11 9-5 20. 2-6 5-1 21. 6-9 25-21 22. 9-14
1-6 23. 14-18 23x7 24. 16x32 6-10 25. 20x27 10x19 26. 27-31 19-23 27. 31-27
23-19 28. 11-16 19-15 29. 16-20 15-19 30. 32-28 22-18 31. 20-24 7-3 32. 27-32
3-8 33. 24-27 8-11 34. 27-31 21-17 
1/2-1/2

Here’s the graphic from jCheckers, showing the final position.

Final Position

Addendum: Here’s one of the interesting positions from the game. Cake finds a move which wins a checker, but both jCheckers and Milhouse seem to struggle to search deep enough to find it. jCheckers played 14. … 27-23, which appears to be a draw, but 14. … 17-13 is a better move, and should win at least a checker.

White to play and win (jCheckers missed it, and so did Milhouse)

Proposed change to Part 97.113

The FCC has a notice of proposed rulemaking whose purpose would be to amend Part 97.113 to allow radio amateurs who are participating in government-sponsored drills to be compensated by their employers. I am actually not a fan of this idea. While the Part 97 regulations acknowledge the value of radio amateurs in providing voluntary emergency communications, I think allowing a radio amateur to draw pay while doing so (even for emergency preparedness) represents a significant shift in the nature of the amateur service. I suspect I’m in the minority, but I went ahead and filed my comments anyway:

My comments regarding Docket 10-72

Moon Bounce Day (Where do I aim the antenna?)

A few years ago, the amateur satellite AO-51 transmitted a beacon message on the 50th anniversary of Sputnik; that was enough to rekindle my interest in amateur radio (which had lain dormant for nearly a decade) and got me working amateur satellites. On April 16, 17 and 18, there is an Echoes of Apollo/Moonbounce day scheduled, a celebration of all things radio-lunar. EME, or moonbounce, communication has always fascinated me, but it has seemed very far out of reach of the small, HOA-limited antennas that I might be able to deploy. This year, however, the radio observatory at Arecibo, Puerto Rico will be active, and it’s a really, really big antenna. There have been reports on the Moon Net mailing list of people successfully receiving such signals using smallish Yagi antennas, so I think I am going to give it a try.

K1JT posted the predicted hours of operation for Arecibo, so the question became “where will the moon be at that time?” My idea is to tripod-mount a 10- or 12-element Yagi, aim it in the rough direction, and try to do some field recording using my FT-817 and a laptop. I was curious what elevation I would need, and it would be good to make sure that no buildings obscured my view.

I could have used any of a number of astronomy software programs, or even Google Earth (and Sky) or Microsoft’s WorldWide Telescope. But I’m a bit of a homebrewer when it comes to calculation, and I like having nice tabular data. So, I rapidly coded up this little Python program, which uses the PyEphem library:

#!/usr/bin/env python

import ephem
from math import degrees

# K1JT mentioned these as likely times for moonbounce operations
# from the Arecibo Radio Observatory in Puerto Rico...

datetimes = [('2010/4/16 16:45', '2010/4/16 19:30'),
             ('2010/4/17 17:40', '2010/4/17 20:20'),
             ('2010/4/18 18:40', '2010/4/18 21:25')]

# I'm currently in grid CM87ux, which is centered here.

me = ephem.Observer()
me.lat, me.long = ('37.979166666666671', '-122.29166666666666')

moon = ephem.Moon()

print " MOON LOCATION ".center(55, "=")
print ("Observer at %.2f, %.2f" % (degrees(me.lat), degrees(me.long))).center(55, '-')
print

for sd, ed in datetimes:
    sd = ephem.Date(sd)
    ed = ephem.Date(ed)
    me.date = sd
    print "Time (UTC)           Time (Local)            Alt   Az  "
    print "--------------------+---------------------+-----------+"
    while me.date <= ed:
        moon.compute(me)
        print "%02d/%02d/%02d %02d:%02d:%02d |" % me.date.tuple(),
        lt = ephem.localtime(me.date)
        print "%02d/%02d/%02d %02d:%02d:%02d |" % (lt.year, lt.month, lt.day, lt.hour, lt.minute, lt.second),
        print '%4.1f %5.1f|' % (degrees(moon.alt), degrees(moon.az))
        me.date = ephem.Date(me.date + 5 * ephem.minute)
    print "--------------------+---------------------+-----------+"
    print

Running this produced the following table for my home location:

==================== MOON LOCATION ====================
---------------Observer at 37.98, -122.29--------------

Time (UTC)           Time (Local)            Alt   Az
--------------------+---------------------+-----------+
2010/04/16 16:44:59 | 2010/04/16 09:44:59 | 23.8  80.1|
2010/04/16 16:49:59 | 2010/04/16 09:49:59 | 24.7  80.8|
2010/04/16 16:54:59 | 2010/04/16 09:54:59 | 25.7  81.4|
2010/04/16 16:59:59 | 2010/04/16 09:59:59 | 26.6  82.1|
2010/04/16 17:04:59 | 2010/04/16 10:04:59 | 27.6  82.8|
2010/04/16 17:09:59 | 2010/04/16 10:09:59 | 28.5  83.4|
2010/04/16 17:14:59 | 2010/04/16 10:14:59 | 29.5  84.1|
2010/04/16 17:19:59 | 2010/04/16 10:19:59 | 30.4  84.8|
2010/04/16 17:24:59 | 2010/04/16 10:24:59 | 31.4  85.5|
2010/04/16 17:29:59 | 2010/04/16 10:29:59 | 32.4  86.2|
2010/04/16 17:34:59 | 2010/04/16 10:34:59 | 33.3  86.9|
2010/04/16 17:39:59 | 2010/04/16 10:39:59 | 34.3  87.6|
2010/04/16 17:44:59 | 2010/04/16 10:44:59 | 35.3  88.3|
2010/04/16 17:49:59 | 2010/04/16 10:49:59 | 36.2  89.0|
2010/04/16 17:54:59 | 2010/04/16 10:54:59 | 37.2  89.7|
2010/04/16 17:59:59 | 2010/04/16 10:59:59 | 38.1  90.4|
2010/04/16 18:04:59 | 2010/04/16 11:04:59 | 39.1  91.2|
2010/04/16 18:09:59 | 2010/04/16 11:09:59 | 40.1  91.9|
2010/04/16 18:14:59 | 2010/04/16 11:14:59 | 41.0  92.7|
2010/04/16 18:19:59 | 2010/04/16 11:19:59 | 42.0  93.5|
2010/04/16 18:24:59 | 2010/04/16 11:24:59 | 43.0  94.3|
2010/04/16 18:29:59 | 2010/04/16 11:29:59 | 43.9  95.1|
2010/04/16 18:34:59 | 2010/04/16 11:34:59 | 44.9  95.9|
2010/04/16 18:39:59 | 2010/04/16 11:39:59 | 45.8  96.7|
2010/04/16 18:44:59 | 2010/04/16 11:44:59 | 46.8  97.6|
2010/04/16 18:49:59 | 2010/04/16 11:49:59 | 47.8  98.5|
2010/04/16 18:54:59 | 2010/04/16 11:54:59 | 48.7  99.4|
2010/04/16 18:59:59 | 2010/04/16 11:59:59 | 49.7 100.3|
2010/04/16 19:04:59 | 2010/04/16 12:04:59 | 50.6 101.2|
2010/04/16 19:09:59 | 2010/04/16 12:09:59 | 51.6 102.2|
2010/04/16 19:14:59 | 2010/04/16 12:14:59 | 52.5 103.2|
2010/04/16 19:19:59 | 2010/04/16 12:19:59 | 53.4 104.2|
2010/04/16 19:24:59 | 2010/04/16 12:24:59 | 54.4 105.3|
2010/04/16 19:29:59 | 2010/04/16 12:29:59 | 55.3 106.4|
--------------------+---------------------+-----------+

Time (UTC)           Time (Local)            Alt   Az
--------------------+---------------------+-----------+
2010/04/17 17:39:59 | 2010/04/17 10:39:59 | 24.9  78.1|
2010/04/17 17:44:59 | 2010/04/17 10:44:59 | 25.9  78.7|
2010/04/17 17:49:59 | 2010/04/17 10:49:59 | 26.8  79.4|
2010/04/17 17:54:59 | 2010/04/17 10:54:59 | 27.7  80.0|
2010/04/17 17:59:59 | 2010/04/17 10:59:59 | 28.7  80.6|
2010/04/17 18:04:59 | 2010/04/17 11:04:59 | 29.6  81.3|
2010/04/17 18:09:59 | 2010/04/17 11:09:59 | 30.6  81.9|
2010/04/17 18:14:59 | 2010/04/17 11:14:59 | 31.5  82.6|
2010/04/17 18:19:59 | 2010/04/17 11:19:59 | 32.5  83.3|
2010/04/17 18:24:59 | 2010/04/17 11:24:59 | 33.4  83.9|
2010/04/17 18:29:59 | 2010/04/17 11:29:59 | 34.4  84.6|
2010/04/17 18:34:59 | 2010/04/17 11:34:59 | 35.3  85.3|
2010/04/17 18:39:59 | 2010/04/17 11:39:59 | 36.3  86.0|
2010/04/17 18:44:59 | 2010/04/17 11:44:59 | 37.3  86.6|
2010/04/17 18:49:59 | 2010/04/17 11:49:59 | 38.2  87.3|
2010/04/17 18:54:59 | 2010/04/17 11:54:59 | 39.2  88.0|
2010/04/17 18:59:59 | 2010/04/17 11:59:59 | 40.1  88.7|
2010/04/17 19:04:59 | 2010/04/17 12:04:59 | 41.1  89.5|
2010/04/17 19:09:59 | 2010/04/17 12:09:59 | 42.1  90.2|
2010/04/17 19:14:59 | 2010/04/17 12:14:59 | 43.0  90.9|
2010/04/17 19:19:59 | 2010/04/17 12:19:59 | 44.0  91.7|
2010/04/17 19:24:59 | 2010/04/17 12:24:59 | 44.9  92.5|
2010/04/17 19:29:59 | 2010/04/17 12:29:59 | 45.9  93.3|
2010/04/17 19:34:59 | 2010/04/17 12:34:59 | 46.9  94.1|
2010/04/17 19:39:59 | 2010/04/17 12:39:59 | 47.8  94.9|
2010/04/17 19:44:59 | 2010/04/17 12:44:59 | 48.8  95.7|
2010/04/17 19:49:59 | 2010/04/17 12:49:59 | 49.7  96.6|
2010/04/17 19:54:59 | 2010/04/17 12:54:59 | 50.7  97.4|
2010/04/17 19:59:59 | 2010/04/17 12:59:59 | 51.6  98.3|
2010/04/17 20:04:59 | 2010/04/17 13:04:59 | 52.6  99.3|
2010/04/17 20:09:59 | 2010/04/17 13:09:59 | 53.5 100.2|
2010/04/17 20:14:59 | 2010/04/17 13:14:59 | 54.5 101.2|
2010/04/17 20:19:59 | 2010/04/17 13:19:59 | 55.4 102.2|
--------------------+---------------------+-----------+

Time (UTC)           Time (Local)            Alt   Az
--------------------+---------------------+-----------+
2010/04/18 18:40:00 | 2010/04/18 11:40:00 | 25.8  77.7|
2010/04/18 18:45:00 | 2010/04/18 11:45:00 | 26.7  78.3|
2010/04/18 18:49:59 | 2010/04/18 11:49:59 | 27.6  79.0|
2010/04/18 18:54:59 | 2010/04/18 11:54:59 | 28.6  79.6|
2010/04/18 18:59:59 | 2010/04/18 11:59:59 | 29.5  80.3|
2010/04/18 19:04:59 | 2010/04/18 12:04:59 | 30.4  80.9|
2010/04/18 19:09:59 | 2010/04/18 12:09:59 | 31.4  81.6|
2010/04/18 19:14:59 | 2010/04/18 12:14:59 | 32.3  82.2|
2010/04/18 19:19:59 | 2010/04/18 12:19:59 | 33.3  82.9|
2010/04/18 19:24:59 | 2010/04/18 12:24:59 | 34.2  83.6|
2010/04/18 19:29:59 | 2010/04/18 12:29:59 | 35.2  84.2|
2010/04/18 19:34:59 | 2010/04/18 12:34:59 | 36.1  84.9|
2010/04/18 19:39:59 | 2010/04/18 12:39:59 | 37.1  85.6|
2010/04/18 19:44:59 | 2010/04/18 12:44:59 | 38.0  86.3|
2010/04/18 19:49:59 | 2010/04/18 12:49:59 | 39.0  87.0|
2010/04/18 19:54:59 | 2010/04/18 12:54:59 | 39.9  87.7|
2010/04/18 19:59:59 | 2010/04/18 12:59:59 | 40.9  88.4|
2010/04/18 20:04:59 | 2010/04/18 13:04:59 | 41.9  89.1|
2010/04/18 20:09:59 | 2010/04/18 13:09:59 | 42.8  89.8|
2010/04/18 20:14:59 | 2010/04/18 13:14:59 | 43.8  90.6|
2010/04/18 20:19:59 | 2010/04/18 13:19:59 | 44.7  91.3|
2010/04/18 20:24:59 | 2010/04/18 13:24:59 | 45.7  92.1|
2010/04/18 20:29:59 | 2010/04/18 13:29:59 | 46.6  92.9|
2010/04/18 20:34:59 | 2010/04/18 13:34:59 | 47.6  93.7|
2010/04/18 20:39:59 | 2010/04/18 13:39:59 | 48.6  94.5|
2010/04/18 20:44:59 | 2010/04/18 13:44:59 | 49.5  95.3|
2010/04/18 20:49:59 | 2010/04/18 13:49:59 | 50.5  96.2|
2010/04/18 20:54:59 | 2010/04/18 13:54:59 | 51.4  97.1|
2010/04/18 20:59:59 | 2010/04/18 13:59:59 | 52.4  98.0|
2010/04/18 21:04:59 | 2010/04/18 14:04:59 | 53.3  98.9|
2010/04/18 21:09:59 | 2010/04/18 14:09:59 | 54.3  99.8|
2010/04/18 21:14:59 | 2010/04/18 14:14:59 | 55.2 100.8|
2010/04/18 21:19:59 | 2010/04/18 14:19:59 | 56.1 101.8|
2010/04/18 21:24:59 | 2010/04/18 14:24:59 | 57.1 102.9|
--------------------+---------------------+-----------+

From this, I determined that I could probably mount the antenna at a fixed 30 degree elevation, aimed mostly east, and I’d likely (given the width of the main lobe of my proposed antenna) be good to go, without any additional guiding.

Now, I just need to get the antenna constructed.

Classic Board Games for Computer Implementation?

So, my experiments with my checkers program Milhouse have been fun and interesting. There is still work to be done: I don’t think a machine that can’t properly play First Position can reasonably be said to be a good checkers player (even though I still make errors while practicing it myself against Cake), but I’ve learned a lot. I’ve been toying with the possibility of doing a rewrite of Milhouse, but maybe that isn’t the best use of my precious after-hours time. For the last couple of days, I’ve been musing about the possibility of finding other strategic board games that might be fun to try. Here is a rough list of my internal criteria:

  1. It should be a game that people play. It would be great if there was some existing scholarship on human play, including test positions.
  2. It should be a game which is uncopyrighted and free from legal restrictions for use. Hence my use of the term “classic”.
  3. It should be reasonably hard. Trivial games like Tic Tac Toe need not apply.

There are some additional considerations…

  1. I’m willing to consider games with a chance component (such as dice in backgammon) or a gambling component (such as the doubling cube in the same).
  2. I prefer games which are more abstract, whose difficulties arise from position and interaction more than complicated rules.

So, what games are out there?

Well, obviously there is chess and its variants, including Shogi, Xiangqi and a slew of modern variants. Originally I had a goal of implementing chess, but I must admit that it doesn’t appeal to me quite as much now, largely because I view it as less abstract than games like checkers. The difficulty of First Position arises not because of complex interactions between different piece types, but because of the interplay of very simple elements. Still, I have considered the possibility of taking one of the variants of chess played on small boards (perhaps a 5×6 board) and developing opening and endgame databases for it. I haven’t done the math on what it would take to resolve the game-theoretic value of one of these small games, but it might be tractable with the computing power that I have at my disposal.

There are obviously variants of checkers, including international draughts and the misère variant, also known as suicide checkers, in which the player who loses all of their pieces first wins. Martin Fierz has a version of Cake called Suicidal Cake that plays suicide checkers, an interesting variant with many of its own subtle problems. International draughts is a much more complicated game, with a larger board and some really interesting rules for jumping. There are other variants as well (Polish, Turkish, Italian) which could be considered strong candidates.

Othello (or Reversi) is an interesting game of position, and it can use many of the techniques we see in checkers and chess (namely opening books and endgame databases). There is also a fair amount of human knowledge about the game, which is good.

Backgammon introduces chance into the mix, which generally means that you need a different set of techniques than we’ve seen so far. I also like to play backgammon (although I am only a middling player). Backgammon has been the subject of quite a bit of machine learning research, and there exist many implementations to spar against.

Hex is a very nearly perfect abstract game: players alternate placing stones, each trying to connect their two opposite sides of the board. No draws are possible. An entire game can be displayed as a single graphic, which shows the order in which stones were placed on the board. The high branching factor makes traditional alpha-beta search less attractive.

Go has many of the same characteristics, with the added bonus of having lots of good human players. There has been a lot of interesting research recently about playing Go.

Arimaa was designed to be hard for computers to play, but frankly, I think it achieves that only by way of a complicated rule system. It seems that between the push, pull, freeze and trap rules, there are at least two and maybe three that could have been eliminated. And while it is (according to the authors) “easy to learn”, I suspect that really only means “easy to learn the rules”. I would make the (unsupported) claim that most human play is quite weak as well.

Awari and other games of the mancala family have been studied quite extensively. Connect 4 has been solved, but is still moderately interesting.

Any other board games I’m missing? I await your suggestions and/or advocacy.

Milhouse vs. the iPhone

I woke up at 5:30 or so this morning and couldn’t get back to sleep, so I thought some more tinkering with Milhouse was warranted. Musing about yesterday’s reading, I realized that I had a bit more work to do before I could really try it out: I’d need to write code to handle regression learning. That seemed like too much, so I thought I’d play a quick game against Optime Software’s Checkers Free for the iPhone. The program is little better than random play on its lowest setting; here’s a PDN file for the first game I played against it:

[Comment "Sparring Match Between Milhouse and iPhone"]
[Black "Milhouse"]
[White "iPhone"]
[Date "Sun Mar 21 09:25:13 PDT 2010"]
[Result "1-0"]
1. 11-16 22-18
2. 10-14 24-20
3. 16-19 23x16
4. 12x19 27-23
5. 8-12 23x16
6. 12x19 26-23
7. 19x26 31x22
8. 14x23 32-27
9. 23x32 30-26
10. 4-8 28-24
11. 6-10 26-23
12.  32-28 21-17
13. 28x26 22-18
14. 9-14 18x9
15. 5x30 20-16
16. 7-11 16x7
17. 2x11 29-25
18. 30x21

Milhouse brutalized it. Checkers Free went off the rails at this rather elementary position at move 7…

White to move: one move might equalize; the other leads directly to disaster.

The iPhone played 31x22, which is hideous: yes, you are exchanging down, but white’s position becomes horribly pinned, and you can’t help but give up material. Milhouse sees this easily with just a 5-ply search.

Addendum: Cranking the iPhone’s settings up to maximum yielded a slightly better game, although still with a fairly lopsided outcome. I didn’t bother playing this one out to the end: while white has two kings, it basically can’t prevent red from marching down and getting a truly overwhelming advantage:

[Comment "Sparring Match Between Milhouse and iPhone on Highest Level"]
[Black "Milhouse"]
[White "iPhone"]
[Date "Sun Mar 21 10:30:56 PDT 2010"]
[Result "1-0"]
1. 11-16 23-18 2. 7-11 26-23 3. 16-20 24-19 4. 11-16 22-17 5. 9-14
18x9 6. 5x14 25-22 7. 8-11 29-25 8. 11-15 28-24 9. 3-8 31-26 10. 8-11
17-13 11. 6-9 13x6 12. 2x9 22-17 13. 9-13 25-22 14. 4-8 23-18 15. 14x23
27x18 16. 16x23 26x19 17. 20x27 32x23 18. 15x24 18-14 19. 10-15 14-10
20. 24-27 10-7 21. 27-31 23-18 22. 31-27 7-2 23. 15-19 30-26 24. 27-31
18-14 25. 31-27 14-10 26. 11-16 10-7 27. 27-23 7-3 28. 23x30 17-14
29. 30-25 22-18 30. 25-22 3-7 31. 22x15 14-10 32. 15x6 2x9

A fairly interesting position occurs for white on move 10. With white to play:

White to move: the iPhone played 31-26, but a better move exists...

Milhouse prefers 17-13, but only narrowly until the search depth approaches 20 ply; at that point it realizes that it might actually be a drawing move. I haven’t worked this out too much with Cake, but checking briefly, it seems that after a 31-ply search it sees the position as essentially drawn. Milhouse would probably take the better part of an hour to reach that search depth.

Can I derive an adequate evaluation function for Milhouse through random play?

As I have mentioned from time to time, I have an implementation of a middling checkers-playing program that I’ve called Milhouse. Every few months I take time out to play with it and decide what I can do to make it better. Recently I’ve been pondering the possibility of implementing drop-out expansion to make a good opening book, and I’ve also been thinking about how I might make further additions to its evaluation function. Sadly, improving its evaluation function requires that I actually understand more about the subtleties of checkers, and as yet my own study of this fascinating game has been sporadic.

There have been a number of papers suggesting schemes for automatically generating heuristics for games. Most of the ones I’ve read have been based upon machine learning, and are sufficiently complicated that I haven’t had the time or energy to take a look at them. But this evening I stumbled upon the following paper by Abramson and Korf, which suggests a different scheme that seems like it could be interesting to experiment with:

A Model of Two Player Evaluation Functions

The basic idea is that the evaluation of a node can be defined as the expected value of the terminal positions reached by playing games out randomly from that node. In other words, you can estimate the value of a position by playing a bunch of random games from that position and averaging their outcomes.
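
Here’s a minimal sketch of that idea in Python (my own illustration, not code from the paper or from Milhouse); the move generator and outcome function below belong to a throwaway toy game, standing in for a real checkers position:

#!/usr/bin/env python
import random

def expected_outcome(position, moves, play, outcome, samples=1000):
    # Estimate a position's value as the average result of `samples`
    # games played out to the end with uniformly random moves.
    total = 0
    for i in xrange(samples):
        p = position
        while outcome(p) is None:
            p = play(p, random.choice(moves(p)))
        total += outcome(p)
    return float(total) / samples

# Toy game: players alternately remove 1 or 2 counters, and whoever
# takes the last counter wins.  A position is (counters, side_to_move).

def moves(p):   return [m for m in (1, 2) if m <= p[0]]
def play(p, m): return (p[0] - m, 1 - p[1])
def outcome(p):
    if p[0] == 0:
        return 1 if p[1] == 1 else -1   # +1 if side 0 took the last counter
    return None

print expected_outcome((10, 0), moves, play, outcome)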

This seems like a bad idea on the face of it: for example consider the classic First Position from checkers:

White to move, and win, but Milhouse flounders...

This position requires a really deep search: in fact, even with an endgame WLD database, my program has difficulty actually making headway. It sees all moves as winning, and therefore all moves as having virtually the same value, which means that it searches a lot of nodes.

Martin Fierz’s Cake program plays this position quite well. It takes a while to resolve: the first capture occurs some 56 ply deep in the search.

[Event ""]
[Date ""]
[Black ""]
[White ""]
[Result "*"]
[Setup "1"]
[FEN "W:WK19,K18:BK32,3."]
1. 19-23 32-28 2. 23-27 28-32 3. 18-23 3-8 4. 27-24 32-28 5. 24-19 28-32 6. 23-26 32-28 7. 
26-31 28-32 8. 19-16 8-12 9. 16-19 32-28 10. 31-27 28-32 11. 19-23 32-28 12. 27-32 28-24 
13. 23-18 24-28 14. 18-22 28-24 15. 32-28 24-19 16. 22-18 12-16 17. 28-32 19-24 18. 18-15 
16-20 19. 15-11 24-19 20. 32-27 19-24 21. 27-23 24-28 22. 11-15 28-24 23. 15-18 24-28 24. 
23-19 28-32 25. 18-23 32-28 26. 23-27 28-32 27. 19-23 20-24 28. 27x20 32-28 29. 23-19 
28-32 30. 20-24 32-28 31. 24-27 28-32 32. 19-23 32-28 33. 27-32 28-24 34. 32-28 24-20 35. 
23-18 20-16 36. 18-15 16-20 37. 15-11 20-24 38. 28x19 *

But this paper points out something interesting: to be useful, the evaluation only needs to differentiate better moves from worse moves at the search horizon. And the fact of the matter is that random play might very well give a good estimate of the relative value of two candidate positions.

Abramson and Korf ran experiments on Othello comparing the expected outcome of random play, used as a heuristic value, against a standard static evaluation function, the worst possible choice, and a complete minimax search from the node (which yields the true value of the game). They found that expected outcome was reasonable, and made fewer errors than the common static evaluator.

So here’s my question: is this applicable to endgames in checkers? At the time the paper was written, there weren’t large endgame databases for checkers, but now I have a copy of the WLD database for all positions with < 8 checkers. I can quite easily run the experiment: compare my own static evaluation function and the evaluation obtained from the expected outcome of random play against the actual game-theoretic value of the position.
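
For concreteness, the comparison might be shaped something like the sketch below; every function here is a hypothetical stand-in (the real versions would probe the WLD database, call Milhouse’s static evaluator, and run random playouts from actual checkers positions), so this only shows the shape of the experiment:

import random

def wld_probe(pos):     return random.choice((-1, 0, 1))  # stand-in for the WLD database lookup
def static_eval(pos):   return random.uniform(-1, 1)      # stand-in for Milhouse's evaluator
def playout_eval(pos):  return random.uniform(-1, 1)      # stand-in for the expected-outcome estimate

def ordering_errors(evaluator, pairs):
    # count how often the evaluator disagrees with the database about
    # which of two positions is preferable
    errors = 0
    for a, b in pairs:
        truth = cmp(wld_probe(a), wld_probe(b))
        if truth != 0 and cmp(evaluator(a), evaluator(b)) != truth:
            errors += 1
    return errors

pairs = [(random.random(), random.random()) for i in xrange(1000)]   # stand-in "positions"
print "static evaluator:", ordering_errors(static_eval, pairs)
print "random playouts: ", ordering_errors(playout_eval, pairs)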

I estimate that it’ll take a few hours of coding to set this up. I’ll report back on the results sometime in the next couple of weeks.

Addendum: I actually arrived here by looking at research on Go, a game which I understand even less than checkers, but which has seen some interesting research in the last few years, particularly the idea of “Monte Carlo” search: using random play to evaluate game positions. I’m not so interested in Go, but I am somewhat interested in Hex, which has similar characteristics: namely, a very high branching factor, which makes deep searching hard, and very large endgame configurations, which prevent straightforward endgame databases. I’d like to write a Hex program someday.

A Shorter Quine in Python

Quines are programs that, when run, produce themselves as output. Previously I had written a fairly longish one in Python. I was reading Russ Cox’s latest blog article (more skimming than reading, actually; it’s neatly subtle), and it said something obvious that hadn’t clicked for me before: the repr function takes a value and converts it to a string that is its representation. Most of the difficulty in making a short quine is in picking a representation which is compact, and since repr is a built-in function, it is very compact. A few minutes of tinkering resulted in this two-line quine:

#!/usr/bin/env python
s = '#!/usr/bin/env python\ns = %s ; print s %% repr(s)' ; print s % repr(s)

It could actually be made shorter and fit on one line, but I think most good programs should specify the interpreter they use. That necessitates a second line.
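
If you want to convince yourself that a candidate really is a quine, a few lines of Python will do it (this checker is my own aside, and “quine.py” is just whatever name you saved the candidate under): run the program and compare its output with its own source.

#!/usr/bin/env python
# Run a candidate quine and compare its output with its own source.
import subprocess
import sys

fname = sys.argv[1] if len(sys.argv) > 1 else "quine.py"
source = open(fname).read()
output = subprocess.Popen(["python", fname],
                          stdout=subprocess.PIPE).communicate()[0]
print "%s is %sa quine" % (fname, "" if output == source else "not ")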

Russ goes on to do something truly remarkable: a zip or tar.gz file which, when extracted, produces a new copy of itself. I am gonna ponder that a bit more and maybe work on my own version.

Building Yagi Antennas

I previously made a half-assed “cheap yagi” just using some aluminum wire and a scrap of wood from the garage, but I’ve been thinking of making something a bit more permanent. What I needed was some practical information on tools and techniques, and this is so far the best article that I managed to find, either online or in my collection of books.

Saved for next weekend, hopefully.

buildinguhfyagis.pdf (PDF)

Can I ever stop doing math?

I’m still trying to shake the worst of a cold, so the Xbox 360 is getting a bit of a workout. I usually only play video games when I’m sick and/or tired, just as relaxation and diversion. Games which are too involving or require long quests aren’t really in the mix for the most part, since I don’t really feel like dedicating that much time to them.

Nevertheless, Carmen picked me up a copy of Mass Effect, which is essentially a space-opera role-playing game where you play a hero trying to recover an alien artifact, yada yada… it’s reasonably well done, but the gameplay has a bit too much running around on side quests for my taste. And, like virtually every RPG-style game, the majority of your time seems to be spent trying to find or buy upgrades to your skills and equipment. Honestly, this kind of game usually doesn’t appeal to me very much: it’s like working a job. But this game is, I must admit, a better and more imaginative example of its genre, so I am about four or five hours into it.

But back to the math:

Inside the game there is a casino called “Flux” which houses a bunch of machines upon which you can play a game called Quasar. It’s sort of like a stripped-down version of Blackjack. The idea is that you try to get a score as close to 20 as you can (without going over). At each step, you can either a) hold with what you’ve got, b) choose the X strategy, which adds a number from 4 to 7 inclusive to your total, or c) choose the Y strategy, which adds a number from 1 to 8 to your total. The game costs 200 quatloos (or whatever) to play, and payouts vary according to your total:

Total   Payout
  15        50
  16       100
  17       200
  18       250
  19       300
  20       400

(The reality of the game is that you actually can’t hold on any total less than 15, but since holding there would always guarantee the maximum possible loss, it hardly matters in determining the optimal strategy.)

Anyway, I wrote a little Python script to compute the optimal strategy, which can be summarized as follows:

If your total is 17 or greater, then you hold and collect your winnings.
If your total is 16, choose strategy Y.
If your total is 15, choose strategy X.
If your total is 1, 2, 6, 7, 8, 13 or 14, pick strategy X.
Otherwise, pick strategy Y.
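
For the curious, a script along those lines can be quite compact (this is my reconstruction, not the original script): work backwards from 20, taking at each total the best of holding, strategy X, and strategy Y.

#!/usr/bin/env python
# Work backwards from 20: value[t] is the best expected payout (before
# subtracting the 200 quatloo entry fee) reachable from a total of t.

payout = {15: 50, 16: 100, 17: 200, 18: 250, 19: 300, 20: 400}

value = {}
best = {}
for t in range(20, 0, -1):
    hold = payout.get(t, 0)
    # strategy X adds 4-7, strategy Y adds 1-8; totals over 20 bust (0)
    x = sum(value.get(t + k, 0) for k in range(4, 8)) / 4.0
    y = sum(value.get(t + k, 0) for k in range(1, 9)) / 8.0
    value[t], best[t] = max((hold, "hold"), (x, "X"), (y, "Y"))

for t in range(1, 21):
    print "%2d: %-4s EV %6.1f (net %+7.1f)" % (t, best[t], value[t], value[t] - 200)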

It isn’t clear to me how the game picks your initial count. In the ten or fifteen minutes that I’ve played the game, it seems to me that the initial count is perhaps uniformly distributed between 1 and 5 (inclusive). That amounts to an average net profit to the player of about 40 quatloos per round; hence, it takes about 25 rounds to make 1000 quatloos.

The best numbers to get are 13, 7, 12, 6, and 1. The worst? 16, 15, 9, 10, 11, although only 16 and 15 have a negative expectation.

I’m doped up with cough medicine, have a sinus headache, and yet here I am, writing programs and doing math instead of playing a video game. I must be crazy.

Addendum: There are many sites that also derived the same strategy, like this one. Google for more if you’d like.