Go is a board game, originally from Asia, that is played on a board with 19 x 19 lines. Two players take turns placing stones (one player gets white, the other black) on the intersections of the lines. The goal is to create territory: the space delimited by your stones. At the end of the game, you count up the points (intersections) in your territory, and add any stones you have captured (you can capture stones by surrounding them and removing them from the board). The person with the highest score wins.
Go is an incredibly complicated game. Because there are so many points on the board (361), there are this many legal positions for a game:
208168199381979984699478633344862770286522453884530548425
639456820927419612738015378525648451698519643907259916015
628128546089888314427129715319317557736620397247064840935
(This means positions where the stones on the board are placed according to the rules. And I’ve added line breaks so the number doesn’t stretch off the side of the page.)
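If you want a sense of how big that number is, a quick back-of-the-envelope check helps. This is my own sketch, not part of Google’s work: each of the 361 intersections is either empty, black, or white, so there can be at most 3^361 arrangements of stones, and the legal positions quoted above are the subset where every group of stones keeps at least one liberty (an adjacent empty point), roughly 1% of that ceiling.

```python
# Back-of-the-envelope check (my own sketch): each of the
# 19 x 19 = 361 intersections is empty, black, or white, so 3**361
# bounds the number of arrangements. The figure quoted above is the
# legal subset, where every group of stones has at least one liberty.
upper_bound = 3 ** 361

legal = int(
    "208168199381979984699478633344862770286522453884530548425"
    "639456820927419612738015378525648451698519643907259916015"
    "628128546089888314427129715319317557736620397247064840935"
)

print(len(str(upper_bound)))         # 173 digits (about 1.7e172)
print(len(str(legal)))               # 171 digits (about 2.1e170)
print(f"{legal / upper_bound:.1%}")  # about 1.2% of arrangements are legal
```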
It’s very hard to write an AI for go. While computers mastered chess relatively easily, because there are only 64 squares and the game is much simpler, go has long resisted them.
Google’s AlphaGo project recently made a brilliant breakthrough, defeating Fan Hui, the European champion, 5 games to 0. The Google page explains how complicated it was to develop their AI:
But as simple as the rules are, Go is a game of profound complexity. The search space in Go is vast — more than a googol times larger than chess (a number greater than there are atoms in the universe!). As a result, traditional “brute force” AI methods — which construct a search tree over all possible sequences of moves — don’t have a chance in Go. To date, computers have played Go only as well as amateurs. Experts predicted it would be at least another 10 years until a computer could beat one of the world’s elite group of Go professionals.
Go requires a different form of AI from chess. Again, here’s how Google explains it:
AlphaGo’s search algorithm is much more human-like than previous approaches. For example, when Deep Blue played chess, it searched by brute force over thousands of times more positions than AlphaGo. Instead, AlphaGo looks ahead by playing out the remainder of the game in its imagination, many times over – a technique known as Monte-Carlo tree search. But unlike previous Monte-Carlo programs, AlphaGo uses deep neural networks to guide its search. During each simulated game, the policy network suggests intelligent moves to play, while the value network astutely evaluates the position that is reached. Finally, AlphaGo chooses the move that is most successful in simulation.
Go AIs have used the Monte Carlo approach for a while now, but never on this scale.
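To make that concrete, here’s a bare-bones sketch of Monte Carlo tree search in Python. To be clear, this is my own illustration, not AlphaGo’s code: where AlphaGo’s policy network suggests moves and its value network evaluates positions, this sketch picks uniformly at random, and a toy Nim game (a stand-in I chose for brevity) takes the place of go.

```python
import math
import random

class Nim:
    """Toy stand-in for go: players alternately take 1-3 stones
    from a pile; whoever takes the last stone wins."""

    def __init__(self, stones=15, to_move=1):
        self.stones = stones
        self.to_move = to_move

    def copy(self):
        return Nim(self.stones, self.to_move)

    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))

    def play(self, move):
        self.stones -= move
        self.to_move = 3 - self.to_move  # switch between players 1 and 2

    def winner(self):
        # The player who took the last stone; None while stones remain.
        return 3 - self.to_move if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move                    # move that led to this node
        self.children = []
        self.untried = state.legal_moves()  # moves not yet expanded
        self.wins = 0.0
        self.visits = 0

    def uct_child(self, c=1.4):
        # Balance exploitation (win rate) against exploration
        # (visiting less-tried children).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=10000):
    root = Node(root_state.copy())
    for _ in range(iterations):
        node, state = root, root_state.copy()

        # 1. Selection: walk down the tree while fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
            state.play(node.move)

        # 2. Expansion: try one new move from this node.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            state.play(move)
            node.children.append(Node(state.copy(), node, move))
            node = node.children[-1]

        # 3. Simulation: finish the game with random moves (this is
        #    where AlphaGo's networks guide play instead).
        while state.legal_moves():
            state.play(random.choice(state.legal_moves()))

        # 4. Backpropagation: credit the result up the path.
        result = state.winner()
        while node is not None:
            node.visits += 1
            if result is not None and result != node.state.to_move:
                node.wins += 1  # a win for the player who moved here
            node = node.parent

    # Play the move that was explored the most.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(Nim(15)))  # optimal is 3, leaving a multiple of 4
```

Even this crude version typically settles on the optimal Nim move; AlphaGo’s advance was to replace the random move selection and random playouts with deep neural networks, and to run the search at enormous scale.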
There is a bit of hubris in Google’s presentation of this event:
We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI.
It’s fair to say that they’ve done very well, but “mastered”? Not quite. AlphaGo is set to take on Lee Sedol, the leading go player in the world, in March, to see if that claim is true.
Anders Kierulf, a developer of go software, has written an article about AlphaGo vs. Fan Hui, and about the coming match against Lee Sedol. Anders’ conclusions are interesting:
- Fan Hui made a number of mistakes that Lee Sedol is unlikely to make.
- While AlphaGo played very well, it did make some mistakes in those five games. Also, Fan Hui did win two unofficial games against AlphaGo (sadly unpublished).
- AlphaGo’s reading (looking ahead many moves to determine whether a plan will work or not) is very strong.
- AlphaGo sometimes mimics the play of professional players and follows standard patterns that may not be optimal in that specific situation. Professional players are more creative and will vary their play more based on subtle differences in other parts of the board.
- AlphaGo may not have a nuanced enough understanding of the value of sente (having the initiative).
- AlphaGo doesn’t show deep understanding of why a move is played, or the far-reaching effects of a move.
And he points out what we don’t know about AlphaGo:
Ko was only played once; AlphaGo did well, but we don’t know how it will do in a complex, protracted ko fight. We don’t know how it will do when the fighting gets more complex. We don’t know how it will do when the board is more fluid and multiple local positions are left unresolved.
You can download a PDF from the British Go Journal with the game records and some commentary on the games, or see Anders Kierulf’s article for links to other commentaries. This program is clearly very strong, and will undoubtedly get better, but can it truly reproduce the creativity and intuition of top human players? Or is that a few more years away?
Do you want to learn how to play go? Check out Anders Kierulf’s SmartGo apps, which let you play games, save and analyze game records, and read go books on iOS devices. You can also read the Macworld article I wrote about a year ago covering those and some other apps. I really hope that Google makes a limited version of this AI available so go players can try it out. Naturally, such a version wouldn’t be as strong – part of the strength of an AI is its ability to use a large number of processors – but it would be great to have a go app to play against that is good enough to help people learn to play better.