Google’s Go computer ‘cannot be beaten’ so South Korean master Lee Se-Dol has quit playing the Chinese strategy game | South China Morning Post

“Google’s Go computer ‘cannot be beaten’ so South Korean master Lee Se-Dol has quit playing the Chinese strategy game”

This is ridiculous. Lee Se-Dol is one of the greats of this game, and if he really is giving up because he can’t beat a machine, he’s being quite childish; after all, he would still be playing against other humans.

That said, he’s been playing for a long time and went pro very young, and at 36 it’s not easy to maintain the same level in competition. He’s probably just burnt out, but he sounds egotistical here.

I’ve been playing go off and on for about 40 years, so this triumph of AI over go players really interests me. What the AlphaGo team did is quite surprising, and they have changed the game, in part because moves and sequences that were long considered good (because humans felt they led to good results) have turned out to be less optimal than others.

Source: Google’s Go computer ‘cannot be beaten’ so South Korean master Lee Se-Dol has quit playing the Chinese strategy game | South China Morning Post

Google’s AI seeks further Go glory – BBC News

Google has challenged China’s top Go player to a series of games against its artificial intelligence technology.

It said the software would play a best-of-three match against Ke Jie, among other games against humans in the eastern Chinese city of Wuzhen from 23-27 April.

It’s fascinating how much progress has been made in AI and go.

One comment. The article says:

It can be very difficult to determine who is winning, and many of the top human players rely on instinct.

This is wrong. Early in a game it’s difficult, but players learn how to count the score, and when someone is down by enough after 100-150 moves, they resign. By the time you get to that stage, it’s pretty clear – at least for pros – what the final score will be.
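To make “counting” a bit more concrete, here’s a minimal Python sketch (my own illustration, with a made-up board format, not anything from AlphaGo) of the idea behind an area count: stones plus the empty points surrounded by only one colour. Mid-game, players do a rougher version of this in their heads, adjusting for groups whose fate isn’t settled yet.

```python
def count_area(board):
    """Very rough area count for a go position.

    `board` is a list of strings using 'B', 'W' and '.'; this assumes
    dead stones have already been removed from the board.
    """
    rows, cols = len(board), len(board[0])
    score = {"B": 0, "W": 0}
    seen = set()

    # Under area scoring, each stone on the board counts as a point.
    for r in range(rows):
        for c in range(cols):
            if board[r][c] in score:
                score[board[r][c]] += 1

    # Flood-fill each empty region and see which colours border it;
    # a region bordered by only one colour is that colour's territory.
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != "." or (r, c) in seen:
                continue
            seen.add((r, c))
            stack, region, borders = [(r, c)], 0, set()
            while stack:
                y, x = stack.pop()
                region += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols):
                        continue
                    if board[ny][nx] == "." and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        stack.append((ny, nx))
                    elif board[ny][nx] in score:
                        borders.add(board[ny][nx])
            if len(borders) == 1:
                score[borders.pop()] += region

    return score

# Tiny 5x5 example: Black walls off the left side, White the right;
# the middle column touches both colours, so it counts for neither.
print(count_area([
    ".B.W.",
    ".B.W.",
    ".B.W.",
    ".B.W.",
    ".B.W.",
]))  # {'B': 10, 'W': 10}
```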

Source: Google’s AI seeks further Go glory – BBC News

Computer Beats World Champion at Go, Three Games in a Row

As The Verge reports:

Virtuoso Go-playing AI AlphaGo has secured victory against 18-time world champion Lee Se-dol by winning the third straight game of a five-game match in Seoul. AlphaGo is now 3-0 up in the series, but there’s no mercy rule here — the remaining games on Sunday and Tuesday will still be played out. AlphaGo is a program developed by DeepMind, a British AI company acquired by Google two years ago.

I’ve been playing go off and on for nearly 40 years, so I understand the implications of this. While people have gotten used to the fact that computers and apps can beat the best chess players, they generally have no idea of the complexity of go.

Go is an incredibly complicated game. Because there are so many points on the board (361), there are this many legal positions:

208,168,199,381,979,984,699,478,633,344,862,770,286,522,453,884,
530,548,425,639,456,820,927,419,612,738,015,378,525,648,451,698,
519,643,907,259,916,015,628,128,546,089,888,314,427,129,715,319,
317,557,736,620,397,247,064,840,935

(This means positions where the stones on the board have been placed according to the rules. And I’ve added line breaks so the number doesn’t stretch off the side of the page.)
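For a sense of scale: each of the 361 points can be empty, black, or white, so a naive upper bound is 3^361. A few lines of Python (using the figure quoted above; the variable names are mine) show that only about 1.2% of those arrangements are actually legal:

```python
# The figure quoted above, with the commas stripped out, compared with
# the naive upper bound of 3**361 arrangements of empty/black/white points.
LEGAL_POSITIONS = int((
    "208,168,199,381,979,984,699,478,633,344,862,770,286,522,453,884,"
    "530,548,425,639,456,820,927,419,612,738,015,378,525,648,451,698,"
    "519,643,907,259,916,015,628,128,546,089,888,314,427,129,715,319,"
    "317,557,736,620,397,247,064,840,935"
).replace(",", ""))

upper_bound = 3 ** 361
print(f"all 3^361 arrangements: {upper_bound:.3e}")
print(f"legal positions:        {LEGAL_POSITIONS:.3e}")
print(f"fraction that is legal: {LEGAL_POSITIONS / upper_bound:.2%}")
```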

What’s interesting about AlphaGo’s performance is not just that it won, but that it played some “creative” moves. In the second game, the AI played a move (the now famous move 37) that all those watching and commenting on the game found to be brilliant.

Go is full of patterns and moves that are considered to be correct, and others that aren’t. Humans generally limit themselves in the moves they play, because of the weight of experience and tradition. But an AI won’t limit itself to what the greats of go played; it will play the moves that are the most effective. It will eventually introduce new moves that humans haven’t considered, or play moves that humans considered to be incorrect (not wrong, just not optimal).

Take, for example, the “new fuseki” movement in go. (Fuseki means opening.) In the 1930s, go players, notably including Go Seigen and Kitani Minoru, started playing radically different openings from what was traditional, changing the nature of the game. They experimented with different ways of playing, discarded what didn’t work, and developed a new range of opening strategies. It’s only because they questioned what was traditional that they were able to change the game so much.

An AI does the same thing. It “knows” a corpus of tens of thousands of games, but it can still be free of the limitations that humans have, and try out any new move that seems more effective. Over time, this AI, and others, will lead to changes in the way go is played.
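To illustrate the mechanism with a hedged sketch (my own simplification, not AlphaGo’s actual code): programs of this kind typically choose moves by combining a prior learned from human games with a value estimated from their own search and self-play, so a move the human-derived prior rates poorly can still be chosen if the search finds that it works. Something like this PUCT-style selection rule:

```python
import math

class Node:
    """A candidate move in a (very simplified) search tree."""
    def __init__(self, prior):
        self.P = prior   # prior probability, e.g. from a policy net trained on human games
        self.N = 0       # number of times the search has visited this move
        self.W = 0.0     # total value the search has accumulated for it

def select_move(children, c_puct=1.5):
    """Pick the move with the best mix of searched value and prior-weighted exploration."""
    total_visits = sum(child.N for child in children.values())
    best_move, best_score = None, float("-inf")
    for move, child in children.items():
        q = child.W / child.N if child.N else 0.0                           # what the search found
        u = c_puct * child.P * math.sqrt(total_visits + 1) / (1 + child.N)  # what the prior suggests
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

# Toy example: the human-derived prior strongly favours move "a",
# but the search has found that the unusual move "b" evaluates better.
children = {"a": Node(prior=0.60), "b": Node(prior=0.05)}
children["a"].N, children["a"].W = 100, 48.0   # searched value 0.48
children["b"].N, children["b"].W = 100, 60.0   # searched value 0.60
print(select_move(children))  # prints "b": the search overrides the prior
```

In the toy numbers, the prior alone would pick “a”, but the value found by the search is enough to tip the choice to “b”; that, in essence, is how such a program can play a move few strong humans would have chosen.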

The importance of what AlphaGo did isn’t limited to just go, of course. It shows that AI has made great strides in recent years, and presages many more to come.

Update: Lee Sedol won the fourth game.