I've been thinking about improving the evaluation function that milhouse uses, and have been particularly interested in automatic tuning of these functions. The two obvious contenders as far as methods go are the TDLeaf algorithm and some kind of evolutionary approach. So I was reading Chellapilla and Fogel's paper "Evolving an Expert Checkers Playing Program without Using Human Expertise" and pondering it a bit. In the paper, they include a game labeled "Played Against Human Rated 2173", which yielded the following position at move number nine:
White
+--------+
| - w w w|
|w w - - |
| w w - w|
|- w - - |
| - r r w|
|r r r - |
| - r r -|
|- r r r |
+--------+
Red
The human "expert" picked 11-16 here. It's not a very good move. In fact, it loses a checker almost immediately. The response is 24-20, and the computer gets the better of a bunch of forced exchanges.
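As an aside, the TDLeaf tuning approach mentioned at the top is easy to state for a linear evaluation function: after each game, nudge each weight in the direction of the principal-variation leaf's feature gradient, scaled by a decayed sum of the temporal differences between successive leaf evaluations. Here is a minimal sketch of that update, assuming a linear evaluator; the function name, toy parameters, and data layout are all my own illustration, not anything from milhouse or the paper.

```python
# Sketch of a TDLeaf(lambda)-style weight update for a LINEAR evaluation
# function. For a linear eval, the gradient of eval with respect to the
# weights is just the feature vector of the principal-variation leaf, so
# no automatic differentiation is needed. Illustrative only.

def tdleaf_update(weights, leaf_features, leaf_values, alpha=0.01, lam=0.7):
    """One TDLeaf(lambda) pass over a single game.

    weights       -- current evaluation weights
    leaf_features -- leaf_features[t]: feature vector of the PV leaf
                     reached by searching from position t
    leaf_values   -- leaf_values[t]: evaluation of that leaf
    alpha, lam    -- learning rate and temporal-difference decay
    """
    n = len(leaf_values)
    # temporal differences between successive leaf evaluations
    deltas = [leaf_values[t + 1] - leaf_values[t] for t in range(n - 1)]
    new_w = list(weights)
    for t in range(n - 1):
        # decayed sum of current and future temporal differences
        td_sum = sum(lam ** (j - t) * deltas[j] for j in range(t, n - 1))
        for i, f in enumerate(leaf_features[t]):
            new_w[i] += alpha * f * td_sum
    return new_w
```

The appeal over an evolutionary approach is that every position in every game contributes a gradient signal, rather than a whole game collapsing to a single win/loss fitness value.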
Later, at Red's seventeenth move, our human is faced with this position:
White
+--------+
| - - - -|
|- w w w |
| w w - w|
|r - - - |
| - r r -|
|- w r w |
| - - - r|
|- r r - |
+--------+
Red
and selects 3-7, which is another dog. The human in this case blundered into a pair of pretty obvious exchanges, which can be found with even relatively shallow searches and the most modest of evaluation functions. In other words, this game doesn't actually demonstrate any deep understanding of the positions involved. I've heard some pretty stiff criticism of the paper along these lines before, but this is the first time I actually worked through the positions myself.
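To make "the most modest of evaluation functions" concrete: a plain material count is enough to see that these exchanges lose a checker, since the blunder shows up directly in the piece totals a few plies later. Here is a sketch of such an evaluator, scoring from Red's point of view and reading the same ASCII convention as the diagrams above ('r'/'w' for checkers; I'm assuming 'R'/'W' would mark kings, which don't appear in these positions). This is illustrative, not milhouse's actual evaluator.

```python
# Minimal material-count evaluation, positive for Red, negative for
# White. Piece values are illustrative; milhouse's real weights are
# exactly what the tuning methods discussed above would try to learn.

CHECKER = 100
KING = 130  # assumed uppercase 'R'/'W' convention for kings

def material_eval(rows):
    """Score a board given as a list of row strings like '| - w w w|'.

    Non-piece characters ('|', '-', spaces) are ignored.
    """
    score = 0
    for row in rows:
        for sq in row:
            if sq == 'r':
                score += CHECKER
            elif sq == 'R':
                score += KING
            elif sq == 'w':
                score -= CHECKER
            elif sq == 'W':
                score -= KING
    return score
```

Applied to the first diagram above, both sides have ten checkers, so the score is level; after the forced exchanges following 11-16 and 24-20, Red is simply down a checker, and even a few plies of search over this evaluator would have steered the human away from the move.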