C&B Notes

Trying to Conquer the Game of Go

Nearly twenty years ago, artificial intelligence conquered chess when IBM’s Deep Blue defeated Garry Kasparov.  Humans still have the upper hand in Go, the ancient Eastern version of chess.  Computer programs struggle to keep broader board positioning in mind while effectively dealing with the sheer number of possible moves on the 19-by-19 grid.

Even in the West, Go has long been a favorite game of mathematicians, physicists, and computer scientists.  Einstein played Go during his time at Princeton, as did mathematician John Nash.  Seminal computer scientist Alan Turing was a Go aficionado, and while working as a World War II code-breaker, he introduced the game to fellow cryptologist I.J. Good.  Now known for contributing the idea of an “intelligence explosion” to singularity theories — predictions of how machines will become smarter than people — Good gave the game a huge boost in Europe with a 1965 article for New Scientist entitled “The Mystery of Go.”  Good opens the article by suggesting that Go is inherently superior to all other strategy games, an opinion shared by pretty much every Go player I’ve met.  “There is chess in the western world, but Go is incomparably more subtle and intellectual,” says South Korean Lee Sedol, perhaps the greatest living Go player and one of a handful who make over seven figures a year in prize money.  Subtlety, of course, is subjective.  But the fact is that of all the world’s perfect information games — tic-tac-toe, chess, checkers, Othello, xiangqi, shogi — Go is the only one in which computers don’t stand a chance against humans.

This is not for lack of trying on the part of programmers, who have worked on Go alongside chess for the last fifty years, with substantially less success.  The first chess programs were written in the early fifties, one by Turing himself.  By the 1970s, they were quite good.  But as late as 1962, despite the game’s popularity among programmers, only two people had succeeded at publishing Go programs, neither of which was implemented or tested against humans.  Finally, in 1968, computer game theory genius Albert Zobrist authored the first Go program capable of beating an absolute beginner.  It was a promising first step, but notwithstanding enormous amounts of time, effort, brilliance, and quantum leaps in processing power, programs remained incapable of beating accomplished amateurs for the next four decades.

To understand this, think about Go in relation to chess.  At the beginning of a chess game, White has twenty possible moves.  After that, Black also has twenty possible moves.  Once both sides have played, there are 400 possible board positions.  Go, by contrast, begins with an empty board, where Black has 361 possible opening moves, one at every intersection of the 19-by-19 grid.  White can follow with 360 moves.  That makes for 129,960 possible board positions after just the first round of moves.  The rate at which possible positions increase is directly related to a game’s “branching factor,” or the average number of moves available on any given turn.  Chess’s branching factor is 35.  Go’s is 250.  Games with high branching factors make classic search algorithms like minimax extremely costly.  Minimax creates a search tree that evaluates possible moves by simulating all possible games that might follow, and then it chooses the move that minimizes the opponent’s best-case scenario.  Improvements on the algorithm — such as alpha-beta search and null-move pruning — can prune the chess game tree, identifying which moves deserve more attention and facilitating faster and deeper searches.  But what works for chess — and checkers and Othello — does not work for Go.
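
To make minimax and pruning concrete, here is a minimal sketch in Python over a hand-made toy game tree.  The tree and its leaf values are invented for illustration; a real engine would generate moves from a board position, and positive scores favor the maximizing player.

```python
# Minimal minimax with alpha-beta pruning over a toy game tree.
# Leaves are numbers (static evaluations); internal nodes are lists
# of children.  The tree below is invented purely for illustration.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: return its evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:               # prune: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

tree = [[3, 5], [6, [9, 2]], [1, 8]]
print(alphabeta(tree, True))                # -> 6
```

The savings from pruning grow when strong moves are searched first — and that move ordering is exactly the knowledge that is hard to come by in Go.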

The trouble is that identifying Go moves that deserve attention is often a mysterious process.  “You’ll be looking at the board and just know,” Michael Redmond, the American 9-dan professional, told me, as we stood in front of the projector screen watching Crazy Stone take back Nomitan’s initial lead.  “It’s something subconscious, that you train through years and years of playing.  I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know.  I just see it.”

Similarly inscrutable is the process of evaluating a particular board configuration.  In chess, there are some obvious rules.  If, ten moves down the line, one side is missing a knight and the other isn’t, generally it’s clear who’s ahead.  Not so in Go, where there’s no easy way to prove why Black’s moyo is large but vulnerable, and White has bad aji.  Such things may be obvious to an expert player, but without a good way to quantify them, they will be invisible to computers.  And if there’s no good way to evaluate intermediate game positions, an alpha-beta algorithm that engages in global board searches has no way of deciding which move leads to the best outcome.
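
The “missing knight” heuristic is easy to state in code: a crude material count using the conventional piece values.  The sketch below is a simplification (real engines add many positional terms), but it shows the kind of quantification that chess admits and Go resists — there is no analogous one-liner for moyo or aji.

```python
# Crude chess evaluation by material count.  A simplification, but it
# captures the "missing knight" case: positive scores favor White.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # conventional values

def material_eval(white_pieces, black_pieces):
    """Each argument is a string of piece letters, e.g. 'QRRBNPPPP'."""
    score = lambda pieces: sum(PIECE_VALUES.get(p, 0) for p in pieces.upper())
    return score(white_pieces) - score(black_pieces)

# Ten moves in, Black is missing a knight: the evaluator sees it at once.
print(material_eval("QRRBBNNPPPPPPPP", "QRRBBNPPPPPPPP"))  # -> 3
```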

Not that it matters: Go’s impossibly high branching factor and state space (the number of possible board configurations) render full-board alpha-beta searches all but useless, even after implementing clever refinements.  Factor in the average length of a game — around 40 turns for chess, 200 for Go — and computer Go starts to look like a fool’s errand.
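
A back-of-the-envelope calculation using only the figures quoted above shows the scale of the problem.  Reading a “turn” as one move by each player (an assumption), the search space is roughly the branching factor raised to the number of plies:

```python
import math

# Rough game-tree sizes from the figures in the text (branching factor b,
# game length in turns), computed as log10(b**plies).  A "turn" is read
# as one move by each player, so plies = 2 * turns.  This is an
# order-of-magnitude illustration, not a real complexity estimate.
def tree_size_digits(branching, turns):
    plies = 2 * turns
    return plies * math.log10(branching)

print(round(tree_size_digits(35, 40)))    # chess: roughly 10^124 lines of play
print(round(tree_size_digits(250, 200)))  # Go: roughly 10^959
```

Even granting generous pruning, no conceivable hardware closes a gap of hundreds of orders of magnitude, which is why brute-force search alone was never going to crack Go.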


According to University of Sydney cognitive scientist and complex systems theorist Michael Harré, professional Go players behave in ways that are incredibly hard to predict.  In a recent study, Harré analyzed Go players of various strengths, focusing on the predictability of their moves given a specific local configuration of stones.  “The result was totally unexpected,” he says.  “Moves became steadily more predictable until players reached near-professional level. But at that point, moves started getting less predictable, and we don’t know why.  Our best guess is that information from the rest of the board started influencing decision-making in a unique way.”

This could mean that computer programs will eventually hit another wall.  It may turn out that the lack of progress experienced by Go programs in the last year is evidence of yet another qualitative division, the same one that divides amateurs from professionals.  Should that be the case, another breakthrough on the level of the Monte Carlo Tree Search could be necessary before programs can challenge pros.
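
For reference, the breakthrough technique can be sketched in miniature.  The code below is a bare-bones Monte Carlo Tree Search with UCT selection, applied to one-pile Nim (take one to three stones; whoever takes the last stone wins) as a toy stand-in for Go.  The game, the node fields, and the iteration count are all invented for illustration — only the four phases of the algorithm matter here.

```python
import math
import random

# Bare-bones Monte Carlo Tree Search with UCT selection, playing
# one-pile Nim: players alternately take 1-3 stones, and whoever
# takes the last stone wins.  A toy stand-in for Go.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def uct_select(node):
    # Balance exploitation (win rate) against exploration (visit counts).
    return max(node.children, key=lambda c: c.wins / c.visits
               + math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(stones):
    """Play uniformly random moves; return how many plies until the end."""
    plies = 0
    while stones:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        plies += 1
    return plies

def mcts(stones, iterations=4000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: try one untested move, if any remain.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: a random playout from the new position.
        plies = rollout(node.stones)
        # 4. Backpropagation: the player who moved into a node wins iff
        #    an even number of plies remained below it (0, 2, 4, ...).
        k = 0
        while node:
            node.visits += 1
            if (plies + k) % 2 == 0:
                node.wins += 1
            node, k = node.parent, k + 1
    # The most-visited child of the root is the recommended move.
    return max(root.children, key=lambda c: c.visits).move

random.seed(0)   # for reproducibility of this toy run
print(mcts(5))   # optimal play takes 1, leaving a multiple of 4
```

The crucial property is that no evaluation function is needed: random playouts stand in for positional judgment, which is what let Go programs sidestep the inscrutable board-evaluation problem described above.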

I was surprised to hear from programmers that the eventual success of these programs will have little to do with increased processing power.  It is still the case that a Go program’s performance depends almost entirely on the quality of its code.  Processing power helps some, but it can only get you so far.  Indeed, the UEC lets competitors use any kind of system, and although some opt for 2048-processor-core supercomputers, Crazy Stone and Zen work their magic on commercially available 64-core hardware.

Even more surprising was that no programmers think of their creations as “intelligent.”  “The game of Go is spectacularly challenging,” says Coulom, “but there is nothing to do with making a human intelligence.”  In other words, Watson and Crazy Stone are not beings.  They are solutions to specific problems.  That’s why it’s inaccurate to say that IBM Watson will be used to fight cancer, unless playing Jeopardy helps reduce tumors.  Developing Watson might have led to insights that help create an artificial diagnostician, but that diagnostician isn’t Watson, just as MCTS programs used in hospital planning are not Crazy Stone.