On Jul 1, 2022, DeepMind posted a tweet about its newly trained model-free multi-agent reinforcement learning agent, DeepNash, which learned to play the board game Stratego. Stratego challenges players to manoeuvre pieces of various army ranks to defeat their opponent and capture the flag. The autonomous agent learned Stratego from scratch to a human expert level.
Among the many board games available, Stratego is one of the few that artificial intelligence (AI) has not yet been able to master. This popular game has a vast game tree on the order of 10^535 nodes, i.e., 10^175 times bigger than that of Go, an abstract strategy board game.
Stratego adds a further complexity: decision-making under imperfect information, similar to Texas hold'em poker. The latter, however, has a much smaller game tree, on the order of 10^164 nodes. Decisions in Stratego must be made over a large number of possible actions, and there is no evident correlation between an action and its eventual outcome.
Each game is long, consisting of many moves before a player wins, and Stratego cannot easily be broken down into manageable sub-problems. For these reasons, the game has been a massive challenge for artificial intelligence, which could previously barely clear the amateur level of Stratego.
DeepNash, by contrast, uses a game-theoretic, model-free deep reinforcement learning algorithm that has mastered Stratego through self-play. Its essential component, the Regularised Nash Dynamics (R-NaD) algorithm, converges to an approximate Nash equilibrium by directly modifying the underlying multi-agent learning dynamics.
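To give a flavour of the idea, the following toy sketch illustrates self-play dynamics whose time-averaged strategies converge to an approximate Nash equilibrium. This is not DeepMind's actual R-NaD implementation (which combines deep networks with regularised learning dynamics at Stratego scale); it uses the much simpler regret-matching algorithm on rock-paper-scissors, a small zero-sum game whose unique Nash equilibrium is uniform random play.

```python
# Toy sketch only -- NOT DeepMind's R-NaD. Illustrates the shared core idea:
# self-play dynamics whose time-averaged strategies approach a Nash
# equilibrium, here via regret matching on rock-paper-scissors.

NUM_ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# Payoff matrix for player 0; the game is zero-sum, so player 1 gets the negation.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regret_sum):
    """Regret matching: play each action in proportion to its positive regret."""
    positives = [max(r, 0.0) for r in regret_sum]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / NUM_ACTIONS] * NUM_ACTIONS

def action_values(player, opp_strategy):
    """Expected payoff of each pure action against the opponent's mixed strategy."""
    if player == 0:
        return [sum(PAYOFF[a][b] * opp_strategy[b] for b in range(NUM_ACTIONS))
                for a in range(NUM_ACTIONS)]
    return [sum(-PAYOFF[b][a] * opp_strategy[b] for b in range(NUM_ACTIONS))
            for a in range(NUM_ACTIONS)]

def self_play(iterations=50000):
    # Slightly asymmetric start so the dynamics do not sit at the fixed point.
    regret_sum = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strategy_sum = [[0.0] * NUM_ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(regret_sum[p]) for p in range(2)]
        for p in range(2):
            values = action_values(p, strategies[1 - p])
            expected = sum(s * v for s, v in zip(strategies[p], values))
            for a in range(NUM_ACTIONS):
                regret_sum[p][a] += values[a] - expected
                strategy_sum[p][a] += strategies[p][a]
    # The time-averaged strategies approximate the Nash equilibrium.
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]

if __name__ == "__main__":
    avg = self_play()
    print(avg[0])  # close to the uniform equilibrium [1/3, 1/3, 1/3]
```

The per-iteration strategies themselves cycle; it is the average over time that approaches equilibrium. Scaling this principle to Stratego's imperfect information and enormous action space is what makes DeepNash's result notable.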
DeepNash surpasses existing artificial intelligence approaches to Stratego and ranks among the all-time top three on the Gravon games platform, competing against human expert players.