This AI Taught Itself To Play Go and Beat The AI Champion
In recent years, DeepMind’s artificial intelligence (AI) has been defeating international champions at Go, an ancient game of enormous complexity. Since May, AlphaGo has been the best player in the world.
Now the tech giant Google’s subsidiary has created an even better version of its artificial intelligence. AlphaGo Zero defeated the previous algorithm and, most crucially, it learned to play entirely on its own.
The game – called Go, Weiqi or Baduk – is played on a 19 × 19 grid with black and white stones; the goal is to surround territory and capture the opponent’s stones. There are more than 10^170 possible positions, compared with about 10^50 in chess.
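As a rough sanity check on that figure (not the exact count from the article), each of the 361 intersections on the board can be empty, black, or white, which gives a crude upper bound on the number of board configurations before illegal positions are removed:

```python
# Crude upper bound on Go board configurations: each of the
# 19 x 19 = 361 points is empty, black, or white. This over-counts,
# because it includes positions that are illegal under Go's capture rules.
points = 19 * 19
upper_bound = 3 ** points
print(f"3^{points} has {len(str(upper_bound))} digits")  # 173 digits, i.e. ~10^172
```

The count of strictly legal positions is smaller, but still vastly larger than anything in chess.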
The original AlphaGo learned its tactics from a data set of over 100,000 games between human players. AlphaGo Zero, in contrast, was given only Go’s basic rules and learned everything by itself.
The algorithm developed its playing ability by competing against itself. It started with random moves on the board and, after each game, updated itself and played again, a process repeated millions of times.
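That self-play loop can be shown in miniature. The sketch below is a toy illustration only, not DeepMind’s actual method (which pairs a deep neural network with Monte Carlo tree search): it applies simple Monte Carlo self-play learning to a tiny variant of the game Nim. Both sides share one value table; after every game, the winner’s moves are reinforced and the loser’s penalized, exactly the “play, update, play again” cycle described above. The names `train` and `best_move` are made up for this example.

```python
import random

def legal_moves(sticks):
    """In this Nim variant you take 1-3 sticks; taking the last stick wins."""
    return [m for m in (1, 2, 3) if m <= sticks]

def train(episodes=50000, alpha=0.1, eps=0.2, start=7, seed=0):
    """Self-play Monte Carlo learning: one shared table mapping
    (sticks, move) -> estimated value for the player about to move."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        sticks, history = start, []
        while sticks > 0:
            moves = legal_moves(sticks)
            if rng.random() < eps:   # explore a random move
                move = rng.choice(moves)
            else:                    # exploit the current estimates
                move = max(moves, key=lambda m: q.get((sticks, m), 0.0))
            history.append((sticks, move))
            sticks -= move
        # The player who took the last stick won. Walk the game backwards,
        # crediting the winner's moves (+1) and penalizing the loser's (-1).
        reward = 1.0
        for state, move in reversed(history):
            old = q.get((state, move), 0.0)
            q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, sticks):
    return max(legal_moves(sticks), key=lambda m: q.get((sticks, m), 0.0))
```

Trained this way, the table should rediscover the well-known winning strategy of always leaving the opponent a multiple of four sticks, a small-scale echo of AlphaGo Zero rediscovering human opening theory before surpassing it.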
After three days, AlphaGo Zero was already able to beat the version of AlphaGo that had defeated former world champion Lee Sedol, winning all one hundred of their matches. And after forty days, it faced a more advanced version of the original AlphaGo, defeating it 90% of the time.
The new algorithm rediscovered Go strategies developed by humans over millennia, and then began to invent moves never seen before. “By not using human data – by not using human experience in any way – we have removed the limitations of human knowledge,” David Silver, AlphaGo’s lead researcher, told a news conference.
In addition, AlphaGo Zero uses much less computing power: it runs on only four TPUs (Google’s specialized AI processors), while earlier versions used 48.
You do not have to be afraid of this artificial intelligence for now. Yes, it has superhuman ability at one specific task, playing Go, but it is far from being smarter than a human.
DeepMind’s idea is precisely to use this AI for specific problems, “such as protein folding, reduced energy consumption, or the search for new revolutionary materials.” The study was published in the journal Nature.
So, what do you think about this? Simply share your views and thoughts in the comment section below.