DEV Community

Nested Software

Originally published at nestedsoftware.com

AlphaGo: Observations about Machine Intelligence

DeepMind and AlphaGo

I enjoy playing the game of go (not to be confused with the programming language). It's also known as baduk in Korea and weiqi in China. Over the last several years, DeepMind has revolutionized the worlds of go and AI with AlphaGo. Before AlphaGo, the best go AIs, based on Monte Carlo tree search (MCTS), were relatively weak: a strong amateur player could beat them, and they stood no chance against professional players.
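For readers who haven't run into MCTS before, here is a minimal sketch of the idea those pre-AlphaGo engines were built on: repeatedly select a promising line, expand the tree by one node, play a random game to the end, and feed the result back up the tree. To keep it self-contained and runnable it plays a toy Nim game rather than go, and all of the names (NimState, Node, mcts) are my own simplifications; a real go engine needs a board implementation and many refinements on top of this.

```python
import math
import random

# Toy game so the sketch actually runs: one-pile Nim. Players alternate
# removing 1-3 stones; whoever takes the last stone wins.
class NimState:
    def __init__(self, stones=15, player=1):
        self.stones = stones
        self.player = player              # side to move: +1 or -1

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        return NimState(self.stones - n, -self.player)

    def is_terminal(self):
        return self.stones == 0

    def winner(self):
        return -self.player               # the player who just moved won

class Node:
    def __init__(self, state, parent=None, mover=None):
        self.state = state                # position after `mover` played
        self.parent = parent
        self.mover = mover                # player whose move led here
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0                   # wins from `mover`'s perspective

    def uct_child(self, c=1.4):
        # Upper Confidence bound applied to Trees: balance exploitation
        # (win rate) against exploration (rarely visited children).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend while fully expanded and non-terminal.
        node = root
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for an as-yet-untried move.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), node, node.state.player)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        w = state.winner()
        # 4. Backpropagation: credit every node on the path taken.
        while node is not None:
            node.visits += 1
            if node.mover == w:
                node.wins += 1
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return root_state.stones - best.state.stones   # stones to take

print(mcts(NimState()))   # optimal play from 15 stones takes 3 (15 % 4)
```

The random playouts are the weak point: in go, a random game is a terrible estimate of a position's worth, which is a big part of why those engines remained relatively weak.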

DeepMind's AlphaGo, using deep learning, changed all that. In January 2016, DeepMind announced that AlphaGo had trounced the retired Chinese professional player Fan Hui in a series of even games. At the time, the go community had thought such a milestone was still about 10 years away.

Then, in the spring of 2016, AlphaGo defeated one of the world's best active players, Lee Sedol, 4-1 in a five-game series. In May 2017, an even stronger version beat the world #1 player, Ke Jie, 3-0 in a three-game series. A similar version, playing several months earlier under the moniker Master, had defeated the world's top professionals 60-0 in online games with fast time controls.

AlphaGo Zero

Perhaps the most interesting development came afterward, though. These earlier versions of AlphaGo used neural networks to learn how to play go, but their sense of what constituted a good move was shaped by the human games used to bootstrap the networks' training.

AlphaGo Zero, described in a Nature paper in the fall of 2017, learned to play go entirely on its own, without using any human games, just by playing against itself. It started off making random moves and became superhuman (with an Elo rating of about 4500) after only 3 days of training. Afterward, DeepMind trained it from scratch again, this time for 40 days, producing an AI with an estimated Elo of over 5000. For comparison, the top human players have an Elo of about 3600.
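To make the self-play idea concrete, here is a deliberately tiny sketch of learning from nothing but one's own games. It is not AlphaGo Zero's actual algorithm: it uses a lookup table instead of a deep network and plain Monte Carlo value updates instead of MCTS-guided training, and it plays a toy Nim game (take 1-3 stones; taking the last stone wins). The names and setup are mine. Still, it shows the core loop: start random, play yourself, update your evaluations toward the outcomes, repeat.

```python
import random
from collections import defaultdict

value = defaultdict(float)   # state -> estimated value for the side to move
counts = defaultdict(int)    # visit counts for incremental averaging

def legal(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def choose(stones, epsilon):
    if random.random() < epsilon:                 # explore a random move
        return random.choice(legal(stones))
    # Exploit: leave the opponent the position we rate worst for them.
    # value[0] defaults to 0: facing an empty pile means you have lost.
    return min(legal(stones), key=lambda n: value[stones - n])

def self_play_game(stones=15, epsilon=0.2):
    history = []                                  # (state, side-to-move)
    player = 0
    while stones > 0:
        history.append((stones, player))
        stones -= choose(stones, epsilon)
        player ^= 1
    winner = player ^ 1                           # whoever just moved won
    # Monte Carlo update: nudge every visited state toward the outcome.
    for state, p in history:
        outcome = 1.0 if p == winner else 0.0
        counts[state] += 1
        value[state] += (outcome - value[state]) / counts[state]

for _ in range(20000):
    self_play_game()

# In this Nim, multiples of 4 are lost for the side to move; their learned
# values should come out clearly lower than the rest.
print({s: round(value[s], 2) for s in range(1, 16)})
```

Swap the lookup table for a deep network, and the naive value updates for network-guided MCTS whose results become the training targets, and you have the general shape of the Zero approach.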

Earlier versions of AlphaGo also had several heuristics hand-coded into the AI. For example, there is a concept in go known as a ladder, and reading out ladders was hand-coded into earlier versions. All such heuristics were removed from the Zero version, so it had to learn everything about go beyond the rules entirely by itself.

A ladder is a situation where one player can chase a group diagonally across the board. If there is a friendly piece (known as a stone) in the right spot on the far side of the board, the group can't be captured; if there isn't, it can. This is something beginners learn almost immediately after picking up the rules of go, yet it turns out to be surprisingly hard for an AI to learn on its own.

AlphaGo Zero exceeded the capabilities of all previous versions of AlphaGo after 40 days of training. It is now widely regarded as the strongest go AI in the world, significantly stronger than any human player.

This was a huge achievement in AI. While brute-force search combined with human-tuned heuristics was sufficient for chess AIs to become unbeatable, go's enormous branching factor (roughly 250 legal moves per position, versus about 35 in chess) and whole-board strategy made it impossible to create a really strong AI using rules or strategies pre-defined by human beings.
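A quick back-of-the-envelope calculation shows how fast those branching factors diverge (35 and 250 are rule-of-thumb averages, not exact figures):

```python
# Approximate number of positions when looking ahead `depth` moves,
# assuming ~35 legal moves per position in chess and ~250 in go.
for depth in (2, 4, 6):
    print(f"depth {depth}: chess ~{35**depth:.1e}, go ~{250**depth:.1e}")
```

At just six moves deep, go's tree is already about five orders of magnitude larger than chess's.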

It took both the pattern recognition of deep neural networks and the immense, highly parallel computation used to train them on vast amounts of data to finally crack the problem.

While the resources needed to train the neural network were enormous (thousands of years' worth of computing time for a single PC), once trained, far fewer computations were needed to actually play. From DeepMind's original paper: "During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently."

Human vs. Machine Learning Styles

DeepMind's paper about AlphaGo Zero mentions something interesting: ladders, which human beginners learn about when they first start to play, are something that AlphaGo Zero only grasped much later in its self-played games. It's unclear exactly when this happened. Here's what the paper has to say about it: "Surprisingly, shicho ('ladder' capture sequences that may span the whole board), one of the first elements of Go knowledge learned by humans, were only understood by AlphaGo Zero much later in training."

It's a bit maddening that the paper doesn't say just how long it took Zero to figure out ladders, but let's assume it was about halfway through its initial 3-day training run. If so, Zero's Elo would already have been in the neighbourhood of 3000, which would place it among the top 500 or so professional players in the world.

This kind of phenomenon, where an AI's overall performance is very high but it has blind spots for things that would be obvious to a human being, is important. If we're developing life-critical AI applications, we may think our AI has achieved exceptional performance, yet it could still be vulnerable to occasionally making fairly trivial mistakes. The way AI learns, at least for the time being, is very different from the way human cognition works.

DeepMind retired AlphaGo soon after publishing the paper about Zero, so we don't know what other weaknesses might still be lurking as edge cases. However, several projects are currently working on implementing the Zero architecture. It may take a while, but eventually we should be able to study an AI of AlphaGo Zero's strength in more detail.

Importance of Intuition and Experience

Another thing that struck me in DeepMind's paper about Zero is the importance of intuition and pattern recognition. The paper shows that a version of Zero that used only the neural network, without reading out any variations at all, still had an Elo of about 3000! In other words, on average it was able to play go at a professional level with pure pattern recognition.
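As a sketch of what "no reading" means in code: a searching engine explores sequences of moves before committing, while the search-free version just asks the network for its move probabilities once and plays its top choice among the legal moves. The `policy` function below is a random stand-in of my own invention, purely so the snippet runs; in the real system it is a deep convolutional network.

```python
import random

BOARD_POINTS = 19 * 19    # one candidate move per point on a go board

def policy(position):
    # Placeholder for the trained policy network: returns a probability
    # for every possible move. Here it is just random noise.
    raw = [random.random() for _ in range(BOARD_POINTS)]
    total = sum(raw)
    return [r / total for r in raw]

def play_by_intuition(position, legal_moves):
    # "Pure pattern recognition": one forward pass, no lookahead at all.
    probs = policy(position)
    return max(legal_moves, key=lambda m: probs[m])

# Hypothetical usage; how `position` is encoded is up to the implementation.
print(play_by_intuition(position=None, legal_moves=range(BOARD_POINTS)))
```

That a single forward pass like this, with no lookahead, could reach roughly 3000 Elo is the remarkable part.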

As we age, our ability to calculate rapidly and precisely declines, but this result shows the immense power of experience. Without reading out any sequences at all, Zero can play professional-level go, far beyond the level of almost any amateur player and better than many professionals who trained full-time starting at about age 8. It just uses its equivalent of intuition to decide where the most important-looking place to play is.

To me that's really astounding. I believe it has a lot to say about the importance of wisdom and experience for us human beings also.

The Mind of a Novice

One last observation I'd like to share is about the difference in strength between Zero, which did not use any human games for its training, and Master, which had the same basic design but was trained with human games. In the end, Zero continued to improve significantly after Master's progress seemed to level off (slightly below 5000 Elo). This suggests that the human games inhibited Master from exploring some valuable ideas.

Indeed, Zero has revolutionized our ideas about how to play go. Many ways of playing that would have been immediately shut down by human professionals in the past are now being seen in a new light. As human beings, we too have to remain open to new ideas, and to try to push ourselves to explore more deeply, even when something may seem obviously foolish at first.

Zero has proved that the accepted wisdom of even the top experts in a given field can and should be questioned. We should always approach any subject with an open, curious mind, even if it challenges our preconceived notions.

Also, if we use data produced by human experts when applying AI to other problems, this kind of limitation is worth keeping in mind.

Conclusion and Caution

Before concluding this article, I think it's important to note that while DeepMind's achievement with AlphaGo is amazing, go remains a much more tractable problem than the open-ended problems of the real world. Go, like chess, is a zero-sum game played on a finite board: There's always a winner and a loser at the end of a game. Go, like chess, is also a game of perfect information: Both players have access to the same board position and there is nothing hidden or random (like there is in poker for example).

In the real world, totally unpredictable things can happen. Taking self-driving cars as an example, a pedestrian or another car can get in the way without warning. A sudden weather event could disrupt sensors. Construction crews may block off a section of road. It's very hard to anticipate all of the things that can happen and what an AI should do in response.

There are also legal and ethical questions: If a car's AI has to swerve into some pedestrians, potentially injuring or killing them, to save the driver, should it do so? And if it does, can the manufacturer get sued? I think it would be very difficult to sue a human driver in such a circumstance, but the AI's behaviour is mediated by software and may be subject to different legal standards.

Of course, there is also the ever-present potential problem of hacking.

I guess the bottom line is that we have to be aware of how complicated the real world is, and not get too easily seduced by the achievements of AI in much more controlled settings (however impressive those achievements may be).

Links

If you're interested in AlphaGo, consider checking out:

Some 3rd-party implementations of DeepMind's AlphaGo Zero architecture:

If you're interested in go more broadly:

Top comments (12)

Peter Kim Frank

I watched the movie inspired by AlphaGo (which is how the film gets its title). It was a great introduction and just a really beautiful story of human achievement against the new crop of machine talent. Pretty sure it's available on Netflix; I highly recommend it.

Thanks for the great article.

Nested Software

Yes - it's a great doc. I'm pretty sure Lee Sedol is somewhat of a household name in Korea, China, and Japan, even among people who don't play go at all themselves. It was really cool that this documentary brought some knowledge of this legendary player to a wider audience in the West too.

Rubén Martín Pozo

I really liked it. It's a great story about what humans are able to achieve and how humans react to these achievements.

Ben Halpern

AlphaGo Zero, described in a Nature paper in the Fall of 2017, learned how to play go entirely on its own without using any human games, just by playing against itself.

I feel like this all got completely overlooked by mainstream reporting. Bravo on this post.

Nested Software

Thank you!

Jilles van Gurp

I tend to think about ethics in terms of risk mitigation and ass coverage. Ethical reasons are great excuses to do nothing and avoid being liable for potentially harmful effects. But usually that's just delaying the inevitable. Self-driving cars causing accidents is not an ethical problem but simply a legal and practical challenge. The math is brutally simple though: as soon as AI cars cause fewer traffic deaths than distracted, drunk, or otherwise incompetent humans driving cars, it's ethically the right thing to use them. Tesla marketing material suggests that they clearly believe that has already happened. However, the legalities, liability, and moral responsibility when the inevitable deaths occur with self-driving cars are still worth debating. It's just not a reason to not work on self-driving cars. Rather the opposite.

Humans are funny when it comes to risk assessment. If everyone were to drive self-driving cars in their current state, there would probably be a massive reduction in traffic deaths, followed by a rapid further reduction as the few accidents that still happen due to bugs, glitches, and other issues get addressed. Most traffic deaths are caused by humans. Fundamentally, things are quite safe already with self-driving cars. However, we're stuck with overly conservative bureaucrats holding the industry back with their insistence on ass coverage, and a legal climate that is leading vendors to prefer to never be liable for anything because of the financial risks of class action suits. So, we're literally killing people by exposing them to human drivers. Is that ethical or stupid?

Raunak Ramakrishnan

Thanks for this very well-written article! I am checking out the references at the end.

A minor point:

It started off with random moves and quickly became superhuman (with an ELO of about 4500) after only 3 days of training.

The number of days is probably not a good metric for judging the speed of training. It played around 5 million games against itself during those 3 days, so its experience is an order of magnitude greater than even the most experienced human player's.

Nested Software

That's a really good point. It's easy to overlook how much processing power is involved in training the network. I'm also really impressed by how DeepMind were able to break the problem down into tasks that could be massively distributed across processing units in parallel.

rhymes

Thanks for the interesting article!

It's interesting how they created an unbeatable AI...

ps. Golang should have chosen a different name :D

Nested Software

Thank you. I agree about golang! 😉

Lasse Schultebraucks

Great post, super interesting topic. Comparing human intelligence and artificial intelligence is super cool; every time I do it, and every time I read about it, I find new differences between the two.

Nested Software

Thank you!