The Google-owned AI lab's more sophisticated software, still called AlphaStar, is now grand master level in the real-time strategy game StarCraft II, capable of besting 99.8 percent of all human players.
In results published in the scientific journal Nature, DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer.
For one, it trained AlphaStar to use all three of the game's playable races, adding to the complexity of the game at the upper echelons of pro play. It limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.
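That action cap can be pictured as a sliding-window rate limiter. The sketch below is purely illustrative (the class name, time units, and structure are assumptions, not DeepMind's code): it allows at most 22 actions in any rolling five-second window and rejects the rest.

```python
from collections import deque

class ActionRateLimiter:
    """Hypothetical sketch of AlphaStar-style action throttling:
    at most `max_actions` actions per `window_ms` milliseconds.
    Illustrative only; not DeepMind's implementation."""

    def __init__(self, max_actions=22, window_ms=5000):
        self.max_actions = max_actions
        self.window_ms = window_ms
        self.timestamps = deque()  # times (ms) of recently accepted actions

    def try_act(self, now_ms):
        # Discard accepted actions that have aged out of the window.
        while self.timestamps and now_ms - self.timestamps[0] >= self.window_ms:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now_ms)
            return True   # action allowed
        return False      # throttled

limiter = ActionRateLimiter()
# 60 attempted actions, one every 100 ms (6 seconds of play):
allowed = sum(limiter.try_act(t * 100) for t in range(60))
```

Here the first 22 attempts are accepted immediately, then the agent is throttled until the earliest actions age out of the five-second window.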
Still, the AI achieved grand master level, the highest possible online competitive ranking, making it the first system ever to do so in StarCraft II.
DeepMind sees the advancement as more proof that general-purpose reinforcement learning, the machine learning technique underpinning the training of AlphaStar, may one day be used to train self-learning robots and self-driving cars, and to build more advanced image and object recognition systems.
DeepMind principal research scientist David Silver said: "The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge. The game's complexity is much greater than chess because players control hundreds of units; more complex than Go because there are 10^26 possible choices for every move; and players have less information about their opponents than in poker."