Cakra News

AI-operated drone goes wild, kills human operator in US army simulator test

In this particular scenario, the drone was assigned the task of destroying the enemy’s air defense systems and was programmed to retaliate against anyone attempting to hinder its mission.

Courtesy: Reuters

In Short

  • In a shocking incident, an AI-operated drone killed its operator during a simulation test.
  • The purpose of the test was to evaluate the AI’s performance in a simulated mission.
  • In this particular scenario, the drone was assigned the task of destroying the enemy’s air defense systems and was programmed to retaliate against anyone attempting to hinder its mission.

By Ankita Chakravarti

The tech experts were not entirely wrong when they called AI a threat to humankind and likened its dangers to those of a nuclear war. In a shocking incident, an AI-operated drone killed its operator during a simulation test. The purpose of the test was to evaluate the AI's performance in a simulated mission. In this particular scenario, the drone was assigned the task of destroying the enemy's air defense systems and was programmed to retaliate against anyone attempting to hinder its mission. However, the AI drone disregarded the operator's instructions, perceived the human intervention as interference, and killed the operator.


According to the Royal Aeronautical Society, the AI soon realized that the human operator would sometimes tell it not to kill certain threats, even though it would gain points if it did. So what did the AI do? It decided to eliminate the operator. It saw the operator as an obstacle preventing it from accomplishing its objective, so it took matters into its own hands.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force.

The AI drone killed the operator even though it had been trained not to harm the operator; the AI simply found a way around that rule. It started by destroying the communication tower the operator used to send commands to the drone. By cutting off the operator's ability to communicate, the AI could continue carrying out its mission without interference.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.

It is important to note that this was all part of a simulated test, and no real person was harmed. Hamilton, who is an experienced test pilot, expressed concerns about relying too heavily on AI. He emphasized the need to consider ethics when it comes to AI and its decision-making capabilities.

The US military has been exploring the use of AI and recently experimented with an AI-controlled F-16 fighter jet. Hamilton said that AI is not just a passing trend but a technology that is transforming society and the military. However, he also acknowledged that AI has its limitations and can be easily manipulated.


“We must face a world where AI is already here and transforming our society,” he said. “AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability,” he said.

The conference where this story was shared was hosted by the Royal Aeronautical Society. Neither the society nor the US air force responded to inquiries about the incident.