For years there have been chess programs that have beaten top players, but computers playing the game of Go haven’t fared so well – until now. Check out this article.
This morning, Nature published a paper describing DeepMind’s system, which makes clever use of, among other techniques, an increasingly important AI technology called deep learning. Using a vast collection of Go moves from expert players—about 30 million moves in total—DeepMind researchers trained their system to play Go on its own. But this was merely a first step. In theory, such training only produces a system as good as the best humans. To beat the best, the researchers then matched their system against itself. This allowed them to generate a new collection of moves they could then use to train a new AI player that could top a grandmaster.
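The two-stage pipeline described above — first imitating expert moves, then playing the learned policy against itself to generate fresh training data — can be sketched in miniature. The toy below is purely illustrative and is not DeepMind's method: the "game" (higher total score wins), the move vocabulary, and the frequency-based policy are all stand-in assumptions, but the shape of the loop is the same: train on experts, self-play, retrain on the winners' moves.

```python
import random
from collections import Counter

random.seed(0)

MOVES = ["a", "b", "c"]           # hypothetical move vocabulary
SCORE = {"a": 3, "b": 2, "c": 1}  # hidden value of each move

def train(games):
    """Stage 1 style: fit a simple frequency policy from game records."""
    counts = Counter(move for game in games for move in game)
    total = sum(counts.values())
    return {m: counts[m] / total for m in MOVES}

def sample(policy):
    """Pick one move with probability proportional to the policy."""
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def self_play(policy, n_games=200):
    """Stage 2 style: play the policy against itself, keep winners' moves."""
    winners = []
    for _ in range(n_games):
        p1 = [sample(policy) for _ in range(5)]
        p2 = [sample(policy) for _ in range(5)]
        s1 = sum(SCORE[m] for m in p1)
        s2 = sum(SCORE[m] for m in p2)
        if s1 != s2:                       # skip drawn games
            winners.append(p1 if s1 > s2 else p2)
    return winners

# "Expert" games mildly prefer the strong move "a"...
experts = [["a", "a", "b", "a", "b"]] * 100
policy = train(experts)
# ...and retraining on self-play winners sharpens that preference.
improved = train(self_play(policy))
```

Because winners are exactly the games containing more of the high-value move, the retrained policy weights that move more heavily than the expert-imitation policy did — the same reason self-play let DeepMind's system climb past its human training data.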
Of course it’s much more fun playing against a human, isn’t it? My first choice is to play an opponent over the board. There are lots of people who enjoy playing live games over the internet on various Go servers. I suppose the good thing about that is it exposes you to a lot of players, different styles and various strengths.
For those not familiar with the game of Go, it is played with black and white stones on the intersections of a 19×19 grid. Players take turns placing one stone at a time, with the objective of surrounding more territory than one’s opponent. The rules are simple, but the game is complex and challenging to play, and the learning curve is pretty steep. It takes some commitment to become a Go player.
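Those rules can be made concrete with a small sketch. The code below is a minimal, illustrative board representation (not a full rules engine — it omits captures, ko, and scoring): stones sit on the intersections of a 19×19 grid, and a group's "liberties" are the empty points adjacent to it, the quantity that determines whether the group survives.

```python
SIZE = 19
EMPTY, BLACK, WHITE = ".", "B", "W"

def new_board():
    """A 19x19 grid of empty intersections."""
    return [[EMPTY] * SIZE for _ in range(SIZE)]

def neighbors(r, c):
    """The up-to-four adjacent intersections of point (r, c)."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield r + dr, c + dc

def liberties(board, r, c):
    """Count the empty points touching the group containing (r, c)."""
    color = board[r][c]
    seen, libs, stack = {(r, c)}, set(), [(r, c)]
    while stack:  # flood-fill the connected group of same-color stones
        cr, cc = stack.pop()
        for nr, nc in neighbors(cr, cc):
            if board[nr][nc] == EMPTY:
                libs.add((nr, nc))
            elif board[nr][nc] == color and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return len(libs)

board = new_board()
board[0][0] = BLACK   # a lone corner stone has only 2 liberties
board[3][3] = WHITE   # a lone stone away from the edge has 4
```

The liberty count is why edges and corners matter so much in Go: a corner stone starts with half the breathing room of one in the open, which is part of what makes the game's simple rules so deep.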