How AI and gaming are becoming the new frontier of innovation
In 2017, a poker bot developed by a team at Carnegie Mellon University became the first AI to beat four world-class poker players at No-Limit Texas Hold 'Em. The bot was named "Libratus," after the Latin word for "balanced." Libratus became a technological sensation, using AI to play poker at the highest level. Its technology even caught the attention of the U.S. military, and in 2019 it was licensed to the U.S. government in a contract estimated at $10 million. The success and application of Libratus is just one example of AI taking over games in the last decade. In the past 10 years, AI and bots have entered many industries, with gaming being a top destination. Games ranging from simple chess matches to complex multiplayer arena games have been taken over by artificial intelligence.
In the early days of gaming, with titles such as Pong, Tetris, and Space Invaders, beating the computer at your favorite video game could be nearly impossible. Many of these games ran in a closed system where bots had perfect information about the game state. As a result, bots could play with godlike ability and dominate any opponent. But as games advanced, bots did not. More complex games, like poker, introduced variables and hidden information that a bot could not control. In a game with multiple players and hidden information, bots were at a severe disadvantage. In recent years, however, bots have improved dramatically and now thrive in these complex games. AI is rapidly giving bots the ability to adapt to outside information and once again rule their games.
How the future beats the past
Since the earliest days of AI, games have been a popular proving ground for AI programs and self-learning applications. Today, some of the most innovative technologies are being used to completely dominate retro and old-school games. One of the best-known AI programs in gaming was created by Google's subsidiary DeepMind. DeepMind started with AlphaGo, an AI built to play Go, a Chinese board game invented over 2,000 years ago and among the oldest games still played today. In 2016, AlphaGo used complex algorithms and self-learning to beat one of the top Go players in the world, one of the first times an AI defeated a top-class player. But the program didn't stop at Go.
AlphaGo's approach has also been applied to other board games such as chess. Its successor, AlphaZero, used the same self-learning techniques as its Go counterpart and learned to play chess in just four hours. AlphaZero was then pitted against Stockfish 8, the reigning world-champion chess program, and defeated it. Beyond chess, AlphaZero also mastered shogi, learning the game rapidly and then beating the top shogi program, Elmo. AlphaZero's most incredible feat was dominating these games entirely on its own, without human assistance. In every game it played, the AI was given only the basic rules. After a few hours of play, it reinforced its own learning, getting better and better with every match.
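The core self-play loop behind a system like AlphaZero can be illustrated in drastically simplified, tabular form (far from DeepMind's actual neural-network-and-tree-search method) with the game of Nim: the bot is told only the rules, plays against itself, and learns which moves win. All names and parameters here are illustrative.

```python
import random
from collections import defaultdict

# Nim: 21 sticks, players alternate taking 1-3; whoever takes the last stick wins.
Q = defaultdict(float)            # Q[(sticks_left, action)] -> value for the mover
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20_000

def legal(sticks):
    return range(1, min(3, sticks) + 1)

def best_value(sticks):
    return max(Q[(sticks, a)] for a in legal(sticks))

random.seed(0)
for _ in range(EPISODES):
    sticks = 21
    while sticks > 0:
        if random.random() < EPSILON:                    # explore a random move
            action = random.choice(list(legal(sticks)))
        else:                                            # exploit current knowledge
            action = max(legal(sticks), key=lambda a: Q[(sticks, a)])
        nxt = sticks - action
        # Taking the last stick wins; otherwise the opponent moves next, so the
        # next state's value is negated (negamax-style self-play).
        target = 1.0 if nxt == 0 else -best_value(nxt)
        Q[(sticks, action)] += ALPHA * (target - Q[(sticks, action)])
        sticks = nxt

# After training, the greedy policy leaves the opponent a multiple of four.
policy = lambda s: max(legal(s), key=lambda a: Q[(s, a)])
```

From 21 sticks the trained policy takes one stick, leaving 20, a multiple of four and a guaranteed-losing position for the opponent, which matches the game-theoretic optimal play. Nobody told the bot this rule; it emerged from self-play, which is the essence of the AlphaZero idea.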
Along with classic board games, retro games like Breakout, Pong, and even Space Invaders became targets for AI. These proved just as easy, if not easier, to conquer: DeepMind's earlier deep-learning agents achieved godlike skill after only a few hours of play.
When the future is present
More recently, DeepMind and other AI labs have been logging hours in more complex, modern games. Some of the top destinations are Starcraft II, DOTA 2, and especially online poker. One of the biggest challenges for AI has been the computational power and resources needed to play complex games. In the past, AI programs applied to large multiplayer strategy games required enormous amounts of computation. Especially when playing live against humans, AI programs must make real-time decisions and evaluate countless strategies in a matter of seconds. The computational resources needed to play these games are astronomical, and the games themselves are far harder for a program to grasp.
This all changed with the creation of DeepMind's Starcraft program, AlphaStar. In Starcraft, the main objective is to harvest resources and raise armies of troops; these troops can then be used to attack other players until only one is left standing. DeepMind's goal with the new program was to push AI past these earlier limits. AlphaStar climbed hills deemed too steep for AI and became one of the top Starcraft II players in the world. It started by beating professional player Dario "TLO" Wünsch, then moved on to its next human opponent, MaNa. Unlike its predecessors, AlphaStar handled the computational load with ease as it stomped its human opponents. And unlike human players, AlphaStar moved with precision, making quick, decisive actions that many humans are incapable of. It was hailed for its "micromanagement," keeping track of every unit in its arsenal on the way to victory.
Alongside AlphaStar is OpenAI's DOTA 2 bot. OpenAI is a research lab co-founded by Elon Musk that set out to create an AI program to conquer the DOTA 2 world. Like Starcraft II, DOTA 2 is an online multiplayer game that has historically been a challenge for AI. In April 2019, OpenAI held its OpenAI Five Finals, matching its program against the e-sports team OG, the reigning DOTA 2 world champions. Like AlphaStar, OpenAI used a form of reinforcement learning to train its AI. Rather than being coded with fixed strategies, the program was built to learn and improve over time. Because it was software, it could play in an accelerated environment, racking up as much as 180 years of gameplay per day to prepare for its human opposition. The bots continuously got smarter and better as they played. In the Finals, OpenAI's self-learning bots played like gods against their human opponents.
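The scale of that accelerated training is easy to put in concrete terms. The back-of-the-envelope arithmetic below uses OpenAI's reported figure of roughly 180 years of gameplay per day, plus an assumed average match length of 45 minutes (my assumption for illustration, not an OpenAI figure):

```python
# OpenAI reported roughly 180 years of DOTA 2 gameplay per wall-clock day.
YEARS_PER_DAY = 180
MINUTES_PER_MATCH = 45            # assumed average match length (illustrative)

simulated_minutes = YEARS_PER_DAY * 365 * 24 * 60
matches_per_day = simulated_minutes // MINUTES_PER_MATCH
speedup = simulated_minutes / (24 * 60)    # simulated time vs. real time

print(matches_per_day)   # -> 2102400  (over two million matches per day)
print(speedup)           # -> 65700.0  (about 65,700x faster than real time)
```

In other words, every single day the bots absorbed more matches than a human professional could play in many lifetimes, which is why a purely self-taught program could show up ready to beat the world champions.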
More than a Game
Although it may seem trivial for a bot to beat a human at a game, what is significant is the type of game. For games like chess and checkers, losing to a bot is not very surprising: the bot can calculate any of your moves and mount counterattacks that outmaneuver any strategy. But machine learning bots have now beaten even the best human players at far messier games such as poker. Poker is full of incomplete information. Unlike in chess, a bot cannot see the other players' cards or the undealt cards in the deck, so it has to form a game plan with the little information it has.
But bots aren't helpless. Unlike conventional bots, machine learning bots can estimate and learn the tendencies of their human opponents and predict the outcomes of particular moves. Because of this ability to learn, advanced bots can solve these games better than humans ever could. So why does this matter? Beating humans at games may not be significant by itself, but the algorithms and data behind these bots are. Proving that bots can learn and improve at such complex activities is the first step toward creating more and more powerful AI.
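One simple way a bot copes with cards it cannot see is by sampling them. The sketch below uses a toy one-card game of my own invention (not any real poker solver): each player gets one card, high card wins, and the bot estimates its win probability by repeatedly dealing random opponent cards consistent with what it knows.

```python
import random

RANKS = list(range(2, 15))           # 2 through 14, where 14 is the ace
DECK = RANKS * 4                     # a 52-card deck, suits ignored

def estimate_equity(my_card, trials=50_000, seed=1):
    """Monte Carlo estimate of P(win) when the opponent's card is hidden."""
    remaining = DECK.copy()
    remaining.remove(my_card)        # the bot can see only its own card
    rng = random.Random(seed)
    # Sample plausible opponent cards and count how often we'd win.
    wins = sum(rng.choice(remaining) < my_card for _ in range(trials))
    return wins / trials
```

Holding an ace, `estimate_equity(14)` comes out near the true value of 48/51 (about 94%); holding a deuce, it is 0. Real poker bots face vastly larger hidden spaces, but the principle of averaging over what you cannot see is the same.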
What’s next for AI in gaming
AI continues to be a major player in gaming, and it's not going to stop anytime soon. Gaming has been a huge and integral part of developing AI: it lets AI innovate on its own, teaching itself new strategies and methods to optimize gameplay. In recent years, AI has moved from basic retro games to complex multiplayer games. Innovations in AI allow it to account for an enormous number of variables and still make split-second decisions with ease.
Poker has become one of the top testing grounds for bots for one particular reason: imperfect information. Unlike retro games, and even multiplayer games like DOTA 2, poker gives AI very little to work with. A poker bot knows nothing about its opponents' cards; only its own hand and the cards on the table are visible, leaving a staggering number of possibilities to account for. With so little information available, AI developers are now focusing on algorithms that build winning strategies from small, incomplete data sets. By continuously improving AI in gaming, these technologies can then be applied to other fields and industries, just as Libratus's were.
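The family of algorithms behind imperfect-information poker bots like Libratus is counterfactual regret minimization. Its simplest building block, regret matching, can be shown on one-shot rock-paper-scissors: two self-playing agents track how much they regret not having played each action, and their average strategies drift toward the unexploitable (Nash) strategy, which here is picking each move a third of the time. This is a minimal sketch of the building block, not Libratus's actual algorithm.

```python
import random

ROCK, PAPER, SCISSORS = 0, 1, 2
ACTIONS = 3

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=20_000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from(regrets[p]) for p in (0, 1)]
        moves = [rng.choices(range(ACTIONS), weights=strategies[p])[0]
                 for p in (0, 1)]
        for p in (0, 1):
            opponent_move = moves[1 - p]
            gained = payoff(moves[p], opponent_move)
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than our move.
                regrets[p][a] += payoff(a, opponent_move) - gained
                strategy_sum[p][a] += strategies[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]   # player 0's average strategy
```

Running `train()` returns an average strategy close to [1/3, 1/3, 1/3]. Real poker has many more decision points and hidden cards, but scaling this regret idea up across them is what let systems like Libratus play championship-level poker.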
With the surge in AI and machine learning, many people are entering the field without much experience. Developing gaming bots is a fun and engaging way for new AI enthusiasts and developers to gain that experience. Even for veteran developers, creating a machine learning-powered bot is a great way to test out new algorithms and theories. As AI continues to venture into different industries, gaming can be the next "Wild West" for AI development.