Dubbed AlphaZero, the new system began its life last year by beating a DeepMind system that had been specialised just for Go.
According to IEEE Spectrum: "That earlier system had itself made history by beating one of the world's best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself -- in just three days."
According to the journal Science, which we get for its spot-the-spherule competition, DeepMind's David Silver said that AlphaZero could crack any game that provides all the information relevant to decision-making.
DeepMind developed the self-training method, called deep reinforcement learning, which it originally worked out to attack Go.
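The core idea behind that method can be illustrated with a toy sketch: the system learns purely by trial and error, with the game's outcome as the only training signal and no human examples. What follows is a minimal stand-in, not DeepMind's actual algorithm or network; the one-action "game", the names, and the parameters are all illustrative assumptions.

```python
import random

def play_game(policy):
    # Toy stand-in for a game: action 1 wins (+1), action 0 loses (-1).
    # Mostly play greedily, occasionally explore a random action.
    if random.random() > 0.1:
        action = max(policy, key=policy.get)
    else:
        action = random.choice(list(policy))
    return action, (1.0 if action == 1 else -1.0)

def train(steps=2000):
    policy = {0: 0.0, 1: 0.0}  # action -> estimated value
    for _ in range(steps):
        action, reward = play_game(policy)
        # Nudge the value estimate toward the observed outcome;
        # the outcome is the ONLY feedback the learner ever sees.
        policy[action] += 0.1 * (reward - policy[action])
    return policy

policy = train()
# Trial and error alone ranks the winning move above the losing one.
assert policy[1] > policy[0]
```

AlphaZero does this at vastly greater scale, with a deep neural network in place of the value table and self-play games in place of the toy reward, but the loop of play, observe outcome, update is the same shape.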
Today's announcement that developers have generalized it to other games means they were able to find tricks to preserve its playing strength after giving up certain advantages peculiar to playing Go.
The biggest such advantage was the symmetry of the Go board, which allowed the specialised machine to calculate more possibilities by treating many of them as mirror images.
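Concretely, a square board has eight symmetries (four rotations, each optionally mirrored), so eight distinct-looking positions can be treated as one. A small sketch of enumerating them, assuming a board stored as a NumPy array:

```python
import numpy as np

def board_symmetries(board):
    """Return the 8 dihedral transforms of a square board array:
    4 rotations, each with and without a left-right reflection."""
    syms = []
    for k in range(4):
        rotated = np.rot90(board, k)
        syms.append(rotated)
        syms.append(np.fliplr(rotated))
    return syms

# A generic (asymmetric) position yields 8 distinct boards.
board = np.arange(9).reshape(3, 3)
syms = board_symmetries(board)
assert len(syms) == 8
assert len({s.tobytes() for s in syms}) == 8
```

Chess and Shogi lack this symmetry (pawns only move forward, and the sides differ), so the generalized system could not rely on it.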
The researchers have so far unleashed their creation only on Go, chess and Shogi, a Japanese form of chess. Go and Shogi are astronomically complex, and that's why both games long resisted the "brute-force" algorithms that the IBM team used against Kasparov two decades ago.