Pondering computer chess…

August 20, 2014 | Computer Chess | By: Mark VandeWettering

At the risk of name dropping, on my flight out to Vancouver for SIGGRAPH last week, I had the exceedingly good luck of scoring a seat next to Pat Hanrahan. Twenty-five years ago, I was working at Princeton in the Applied Math department, and the single smartest thing I did was make Pat’s acquaintance. Besides providing countless insights into computer graphics over lunches and chats, he helped me score my current job, where I’ve been gainfully employed for the last 23 years. He claims that he occasionally even reads my blog, so if you are reading this, Pat: thanks a million!

During the two-hour flight, our chat ranged over a wide variety of topics. One topic that is completely unrelated to my work is my interest in computer chess and checkers, mainly as applications of game tree search. Even as an undergraduate I was fascinated by the topic, but when Deep Blue beat Kasparov 3½–2½ (two wins to one, with three draws) in their six game match in 1997, I kind of pushed it to the back of my mind. I mean, I thought it was over.

I’ve made this mistake before.

What’s amazing is that computer chess programs have gotten better recently. And not just a little better: a lot better. One particularly interesting chess program is Stockfish. Firstly, it is an open source project, which means that its innards are available for your inspection. Secondly, it is available on a wide variety of platforms, including as a free application on the iPhone. I interrupt this diatribe to show the game I played against Stockfish on my return flight. I don’t play very often, but I managed to eke out a shaky draw with Stockfish thinking for 10 seconds per move. I only lost concentration once and stumbled into an obvious blunder (which I shamelessly took back so I could have another try). Here’s the game, using a spiffy WordPress plugin.

Anyway, the third thing I thought was cool is that Stockfish is really, really good. It’s clear that it would crush any human player in a match: it’s rated about 400 Elo points higher than Magnus Carlsen, which means that Stockfish 5 would be expected to score about 90% against him. I didn’t think an improvement like that could have come purely from faster CPUs, so I wanted to dig in a bit and see what else might have helped Stockfish get so good.
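
For the curious, that 90% figure falls straight out of the Elo rating model, which maps a rating gap to an expected score through a logistic curve. Here’s a minimal sketch in Python (the function name expected_score is just mine for illustration):

```python
def expected_score(rating_gap: float) -> float:
    """Expected score (wins plus half of draws) for the higher rated
    player, under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

print(expected_score(400))  # ~0.909, i.e. roughly 90%
```

A 400 point gap works out to an expected score of just over 0.9, i.e. winning or drawing your way to about nine points out of every ten.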

Interestingly, I think one of the biggest contributors is exhaustive testing. The Stockfish project has implemented a distributed testing facility called Fishtest. The idea is pretty simple: volunteers contribute CPU time to exhaustively test commits to the source tree and measure their effect on gameplay. You can read more about it here. According to the Wikipedia article on Stockfish, this allowed Stockfish to gain 120 Elo points in just 12 months.
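
To give a flavor of what a framework like that is doing, here’s a toy sketch (mine, not Fishtest’s actual code; the real thing runs actual engine builds on volunteers’ machines and uses a sequential probability ratio test to decide when a patch has proven itself). The idea: play a pile of games between the patched build and the base build, then convert the observed score back into an Elo difference.

```python
import math
import random

def play_game(elo_gap: float) -> float:
    """Stand-in for one patched-vs-base engine game; returns 1.0 if the
    patched build wins, else 0.0 (draws omitted to keep the toy simple).
    A real tester launches two actual engine builds here."""
    p_win = 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))
    return 1.0 if random.random() < p_win else 0.0

def estimate_elo_gain(true_gap: float, games: int = 10000) -> float:
    """Play many games and convert the observed score into Elo."""
    score = sum(play_game(true_gap) for _ in range(games)) / games
    return 400.0 * math.log10(score / (1.0 - score))

# With enough games, the estimate converges on the true gap.
print(f"estimated gain: {estimate_elo_gain(10.0):+.1f} Elo")
```

What jumps out of even this toy version is how many games it takes: reliably resolving a difference of a few Elo points requires tens of thousands of games, which is exactly why farming the work out to volunteers pays off.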

Anyway, my chats with Pat and some pondering about the ideas behind Stockfish have made me want to dust off my Milhouse checkers program and see if I can’t borrow some ideas from Stockfish, as well as others from Pat (implementing checkers on an FPGA?). We’ll see what happens.
