Google AI introduced machine-learning-based systems for quickly and efficiently training game-playing agents, enabling developers to find bugs faster. The approach combines a high-level semantic API (Application Programming Interface) with an interactive training flow that lets developers train useful ML policies for video game testing.
Traditional video game testing requires playing the game for long stretches to detect bugs, whereas ML systems can “just play the game” at scale. Rather than building a single system that plays end-to-end, Google employs game-testing agents that each accomplish short tasks of a few minutes. Combining these ML policies with a small amount of simple scripting lets developers test large stretches of gameplay.
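The idea of stitching short ML-driven tasks together with simple scripting can be sketched as follows. Everything here is a toy stand-in (the `GridGame` environment, the `walk_right` policy, the function names), not any real Google API:

```python
# Hypothetical sketch: short ML-policy tasks glued together by simple
# scripting to cover a long stretch of gameplay.

class GridGame:
    """Toy 1-D game: the agent must walk from position 0 to the goal."""
    def __init__(self, goal=5):
        self.pos, self.goal = 0, goal

    def step(self, action):        # action is +1 or -1
        self.pos += action
        return self.pos

    def task_complete(self):
        return self.pos >= self.goal

def run_policy(env, policy, max_steps=100):
    """Let one short-horizon ML policy drive the game for a single task."""
    obs = env.pos
    for _ in range(max_steps):
        obs = env.step(policy(obs))
        if env.task_complete():
            return True
    return False                   # task never finished: a candidate bug

def walk_right(obs):               # stands in for a trained ML policy
    return 1

def test_long_stretch():
    env = GridGame(goal=5)
    ok = run_policy(env, walk_right)   # ML policy handles the first task
    env.pos = 0                        # simple scripting resets/bridges state
    return ok and run_policy(env, walk_right)

print(test_long_stretch())  # True if both stitched tasks succeed
```

The scripting layer (here just resetting `env.pos`) stands in for glue such as teleporting the player or granting items between ML-driven segments.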
The tricky part of employing ML in game development is bridging the gap between the simulation-centric world of game engines and the data-centric world of ML. To bring them together, developers are given an idiomatic, game-developer-friendly API that lets them describe their game, including its semantic actions and observations from the player’s point of view, directly in their own development language.
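A semantic, player-centric API of this kind might look something like the sketch below. The type names and fields are illustrative assumptions, not Google’s actual API; the point is that the developer declares typed observations and actions rather than feeding raw pixels or engine internals to the ML system:

```python
# Hypothetical sketch of a semantic, developer-facing game API:
# the game is described from the player's point of view with
# typed observations and actions.

from dataclasses import dataclass

@dataclass
class Observation:
    player_position: tuple       # (x, y) in world units
    nearest_enemy_distance: float
    health: float                # normalized to 0.0 .. 1.0

@dataclass
class Action:
    move_direction: tuple        # unit vector (dx, dy)
    fire_weapon: bool

def describe_frame(game_state):
    """Translate raw engine state into the semantic observation."""
    return Observation(
        player_position=game_state["pos"],
        nearest_enemy_distance=game_state["enemy_dist"],
        health=game_state["hp"] / game_state["max_hp"],
    )

obs = describe_frame({"pos": (3, 4), "enemy_dist": 7.5, "hp": 80, "max_hp": 100})
print(obs.health)  # 0.8
```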
To translate the simulation into data, the API has to be converted into neural networks. The API is designed to adapt to the specific game being developed: the particular combination of API building blocks the game developer uses lets the system choose an appropriate network architecture.
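One plausible way a declared combination of building blocks could determine an architecture is sketched below, assuming each declared observation gets its own encoder and each declared action gets its own output head. The spec format and layer sizes are assumptions for illustration:

```python
# Hypothetical sketch: deriving a network layer plan from the
# developer's API choices. Each observation field maps to an encoder,
# each action field to an output head, joined by a shared trunk.

observation_spec = {"position": 2, "enemy_distance": 1, "health": 1}
action_spec = {"move_direction": 2, "fire_weapon": 1}

def build_architecture(obs_spec, act_spec, hidden=32):
    """Return (in_dim, out_dim) shapes for every layer in the plan."""
    encoders = {name: (dim, hidden) for name, dim in obs_spec.items()}
    trunk_in = hidden * len(obs_spec)        # concatenated encodings
    heads = {name: (hidden, dim) for name, dim in act_spec.items()}
    return {"encoders": encoders, "trunk": (trunk_in, hidden), "heads": heads}

arch = build_architecture(observation_spec, action_spec)
print(arch["trunk"])  # (96, 32): three encoders feed one shared trunk
```

Adding or removing a building block in the spec changes the generated plan automatically, which is the adaptivity the paragraph describes.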
Once the neural network architecture is generated, the network must be trained to play the game. Rather than using traditional reinforcement learning, Google used imitation learning (IL) inspired by the DAgger algorithm, which trains ML policies by observing experts play the game. Google has also released an open-source library that demonstrates a functional application of these techniques.
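The core DAgger idea, the learner drives while the expert labels every state the learner actually visits, then the aggregated dataset retrains the learner, can be shown on a toy problem. Here the "expert" is a scripted rule standing in for a human demonstrator, and the "network" is a lookup table; this is a minimal illustration of the algorithm's loop, not Google's implementation:

```python
# Hypothetical DAgger-style imitation learning loop on a toy task.

def expert(state):
    """Scripted expert: move right until reaching state 10."""
    return 1 if state < 10 else 0

def train(dataset):
    """'Train' a lookup-table policy from (state, action) pairs."""
    table = {s: a for s, a in dataset}
    return lambda s: table.get(s, 0)      # default action: do nothing

def dagger(iterations=3, horizon=12):
    dataset = []
    policy = lambda s: 0                  # start with a do-nothing policy
    for _ in range(iterations):
        state = 0
        for _ in range(horizon):
            dataset.append((state, expert(state)))  # expert labels the state...
            state += policy(state)                  # ...but the learner drives
        policy = train(dataset)                     # aggregate and retrain
    return policy

policy = dagger()
print(policy(2))   # 1: the learner imitates the expert on visited states
print(policy(50))  # 0: unvisited states fall back to the default
```

Labeling the states the learner itself visits (rather than only the expert’s trajectory) is what distinguishes DAgger from plain behavioral cloning and mitigates compounding errors.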