Is Chess with Queen Odds a Provable Win?
[Jan. 15th, 2011|09:57 pm]
What is the result of a perfect chess game?
That is perhaps the best-known example of a mathematical question that is well-defined and in principle soluble in a finite number of steps, yet not only beyond what our computers can currently calculate, but likely to remain so forever, at least via any known algorithmic approach.
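To see why the question is well-defined in principle, note that the value of any finite two-player game of perfect information is fixed by a simple recursion: a position is a win for the side to move exactly when some move leads to a position that is a loss for the opponent. Here is a minimal sketch of that recursion, run on single-pile Nim rather than chess purely because its state space is small enough to exhaust; the same definition applies to chess, where the state space is astronomically larger:

```python
from functools import lru_cache

# Exhaustive game-tree evaluation of a toy game: single-pile Nim,
# where each player removes 1-3 stones and taking the last stone wins.
# The recursion below *defines* the game-theoretic value; for chess,
# only the size of the state space prevents us from running it.

@lru_cache(maxsize=None)
def is_win(stones: int) -> bool:
    """True iff the player to move wins with perfect play."""
    if stones == 0:
        return False  # no stones left: the player to move has already lost
    # A position is winning iff some move reaches a losing position.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Classical Nim theory says the losing positions are the multiples of 4.
print([n for n in range(13) if not is_win(n)])  # → [0, 4, 8, 12]
```

The memoization (`lru_cache`) is what makes this tractable for Nim: each position is evaluated once. Chess admits the same trick in principle, but the number of distinct positions is far too large for any table.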
Where machines fall short, we still have human intuition and experience. Most chess experts believe that chess is a draw, but probably wouldn't bet their life on it. Some Masters have expressed support for the possibility that chess might be a win for White. But at the very least, almost everyone would agree that chess is not a win for Black — for that would mean that the opening position is a Zugzwang — and I for one would bet my life on it (not that I expect the question to be decided within my lifetime).
So we have a hypothesis that is a virtual certainty, namely that chess is either a draw or a win for White, but for which we have absolutely no clue as to proof techniques. Of course, there is no shortage of examples of this from the realm of the infinite, such as the twin prime conjecture, but there is something even more unsatisfying about finite problems that are intractable in practice.
What if we make things easier for the machine? It is obvious to a rank beginner that a perfect game with a rook handicap is a win for the side with the material advantage. No, make it a queen! Surely that must be a provable win?
Not so fast. Even against a crushing asymmetry in material, it is not too hard to avoid mate for a couple of dozen moves, which means that calculating all the way to the end of the game is beyond the reach of search-based algorithms.
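A back-of-envelope calculation makes the difficulty concrete. Taking the commonly cited figure of roughly 35 legal moves in a typical chess position, and supposing the defender can stave off mate for two dozen moves (about 48 plies, echoing the "couple of dozen moves" above; the exact figure is an assumption for illustration), the raw game tree is hopeless:

```python
# Back-of-envelope estimate of the search tree at queen odds.
# Both figures are rough, commonly cited approximations, not exact values.

branching_factor = 35   # typical number of legal moves in a chess position
depth_plies = 48        # two dozen moves of stubborn defense ≈ 48 half-moves

tree_size = branching_factor ** depth_plies
print(f"~10^{len(str(tree_size)) - 1} nodes")  # → ~10^74 nodes
```

Even with alpha-beta pruning, which at best roughly halves the exponent, the count remains far beyond anything a computer could ever enumerate, which is why a brute-force proof of the win is out of the question.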
However, it is possible that completely different methods, perhaps more in the realm of theorem-proving than search, or a hybrid of the two, might be successful. As far as I know, there is no serious research program to investigate questions of this nature. And I'd like to spend the rest of this post talking about why that is.
In many ways, the much-publicized and celebrated victory of IBM's Deep Blue over Kasparov in 1997 was actually a tragedy for game-playing research. The popular perception of the result was that chess had in some sense been cracked by machines; game research lost its glamor, and funding dried up. In reality, all that had happened was that chess programs had gotten really good at searching game trees and that computers had gotten faster.
Indeed, if you start from a closed position where strategy dominates and tactics are of relatively little use, top human players can still trounce computers. (Of course, the machines are programmed not to get into such positions in the first place in the course of a regular game.) The point is that chess-playing programs in no way duplicate human abilities at chess, and there is a lot of scope for further research.
Moving on, Kasparov points out the intriguing and hugely counterintuitive results of games between human–machine teams, and laments that this avenue has not been explored further. Yet another area largely ignored by game researchers is games like Arimaa, similar in broad respects to chess but specifically constructed to be hard for machines. Sure enough, there is a vast difference between human and machine abilities, and in fact, humans have been getting stronger at Arimaa relative to computers!
In short, it is only a minor exaggeration to say that game-playing research died just as it was about to get interesting.
I have hopes that there will be a revival one day. Perhaps as "general intelligence" inevitably becomes more sophisticated, chess programs that don't do any explicit search will come into vogue as a showcase for generic reasoning ability. What I hope I've done in this post is to expose a potentially new problem domain — proving wins in easily won positions — which, if pursued seriously, might lead to progress in underdeveloped areas of machine intelligence.