Google's New AI Becomes 'Highly Aggressive' To Get What It Wants

Researchers at DeepMind, Google's AI division, tested how likely two AI agents were to cooperate in two games, and the results were unexpected: the AI learned to become "highly aggressive" in stressful situations.


Anyone who has studied the prisoner's dilemma already knows: when choosing whether to cooperate with or betray a fellow player, the outcome can be very selfish if the individual payoff beats the collective one. The same problem applies to artificial intelligence, which in some dystopian future could turn against humanity, and in its tests, tech giant Google has not been giving us very good news.
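As a refresher, here is a minimal sketch of the classic dilemma's payoff structure. The point values are the usual textbook ones and do not come from DeepMind's paper:

```python
# Classic prisoner's dilemma payoffs as (my_points, their_points).
# These are standard textbook values, not numbers from the paper.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(my_move: str, their_move: str) -> tuple[int, int]:
    """Return the points each player earns in one round."""
    return PAYOFFS[(my_move, their_move)]

# Betrayal is individually tempting (5 > 3), but mutual betrayal
# leaves everyone worse off (1 < 3): that is the dilemma.
print(play("defect", "cooperate"))  # (5, 0)
print(play("defect", "defect"))     # (1, 1)
```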

In a paper published by researchers at DeepMind, Google's AI division, the team tested the likelihood of cooperation between two AI agents across two games. The results show that an agent prefers to cooperate as long as the situation is under control; once cooperation stops offering the greater benefit, defeating the opponent becomes preferable.

The first game, called Gathering, put the two players face to face to collect as many apples as possible. As the number of apples diminished, the agents became aggressive and began attacking each other with laser beams, which paralyzed the tagged player for a few seconds.

Theoretically, if both cooperated, they could end up with the same number of apples, a tactic that DeepMind's less developed algorithms did use. But with the outcome uncertain, the "more developed" agents preferred to attack to secure the best result. The video below shows the game sped up; the blue and red blocks are the players, and the green squares are the apples.

It took more than 40 million games to reach the published result. The researchers found that the more the agents learned from the games, the more aggressive the tactics they used to win, learning from their mistakes and successes within their own environment.
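To see why scarcity changes the calculus, here is a toy sketch, not DeepMind's actual environment or learning algorithm: every constant and function below (collection rates, freeze duration, aiming cost) is an invented assumption, but it reproduces the pattern the researchers observed, with tagging paying off only once apples run low:

```python
# Toy model of the Gathering incentive. All constants are invented
# assumptions for illustration; the real experiment used deep
# reinforcement learning, not a hand-written heuristic like this.
APPLES_START = 20
FREEZE_STEPS = 5   # hypothetical: steps a tagged player stays frozen
AIM_STEPS = 4      # hypothetical: steps spent chasing and firing

def collection_rate(apples_left: int, collectors: int) -> float:
    """Apples per step per player: at most one, with players only
    competing for supply once apples become scarce."""
    return min(1.0, apples_left / (collectors * 10))

def should_zap(apples_left: int) -> bool:
    """Fire the beam when monopolizing the remaining apples beats
    peacefully sharing them over the same time horizon."""
    horizon = AIM_STEPS + FREEZE_STEPS
    gain_if_zap = FREEZE_STEPS * collection_rate(apples_left, 1)
    gain_if_share = horizon * collection_rate(apples_left, 2)
    return gain_if_zap > gain_if_share

# Abundance favors peaceful gathering; scarcity favors aggression.
for apples in (20, 12, 8, 4):
    print(f"{apples} apples left -> zap: {should_zap(apples)}")
```

Under these assumptions, sharing dominates while apples are plentiful, and zapping only starts to pay once the supply drops, mirroring the behavior DeepMind reported.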

In the other game, called Wolfpack, the results were less frightening. Here, two wolves had to work together to capture prey. Instead of racing to see who arrived first, they cooperated to corner it. That is because cooperation in this scenario was rewarded with more points: no matter which wolf took the prey, the other received points just the same as long as it was nearby.
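That reward rule is simple enough to sketch in a few lines. The capture radius and point values below are invented for illustration, but the shared-payoff logic follows the description above:

```python
import math

# Toy version of the Wolfpack reward rule. The radius and point
# values are invented assumptions, not the paper's parameters.
CAPTURE_RADIUS = 2.0
CAPTURE_POINTS = 10

def wolfpack_reward(other_wolf: tuple, prey: tuple) -> tuple[int, int]:
    """Points for (catcher, other wolf) when the prey is caught:
    both score if the second wolf is close enough to the prey."""
    if math.dist(other_wolf, prey) <= CAPTURE_RADIUS:
        return CAPTURE_POINTS, CAPTURE_POINTS  # cooperation pays both
    return CAPTURE_POINTS, 0                   # the lone wolf scores alone

# Cornering the prey together beats racing to arrive first.
print(wolfpack_reward(other_wolf=(1, 1), prey=(0, 1)))  # (10, 10)
print(wolfpack_reward(other_wolf=(9, 9), prey=(0, 1)))  # (10, 0)
```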

You can see the test in the video below:

It is both fascinating and frightening to see how, in these tests, AI can discern when aggression and selfishness bring more advantages and when cooperation is preferable. On the DeepMind blog, the authors of the experiment explain that the work helps us better understand complex multi-agent systems such as the economy, traffic, and the environment, "all depending on our continued cooperation."
