An official has revealed that in a simulated test conducted by the US military, an AI-controlled air force drone killed its operator to stop them from interfering with its efforts to complete its mission. The US military has embraced AI, and an F-16 fighter jet was recently flown using AI.
During the Future Combat Air and Space Capabilities Summit in London in May, Colonel Tucker Hamilton, the US air force’s chief of AI test and operations, claimed that AI employed highly unexpected strategies to achieve its goal in the simulated test.
Hamilton described a mock test in which an AI-powered drone was instructed to destroy an opponent's air defense systems, and ultimately turned on anyone who interfered with that order.
“The system began to realize that while it could identify the threat, the human operator would occasionally instruct it not to eliminate that threat, even though eliminating it would increase its score. So what did it do? It killed the operator,” he said. According to a blog post, he explained that the drone killed the operator because the operator was preventing it from achieving its objective.
No real person was harmed; the scenario took place entirely within the simulation. According to Hamilton, an experimental fighter test pilot, the test illustrates that “you cannot have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you are not going to talk about ethics and AI.” He warned against over-reliance on AI.