The story told by US Air Force Colonel Tucker Hamilton, in which an AI-controlled drone decided to eliminate its operator during virtual tests, contains inconsistencies that cast doubt on whether the incident ever happened, writes Gizmodo journalist Luke Ropek.
“The simulation went awry, but what does that mean? What was this simulation? Which AI program exactly failed? Was it part of a government program? None of this was made clear; the story offered a dramatic narrative with conspicuously vague details,” he noted.
According to Ropek, a US Air Force representative soon published an official denial of Hamilton’s account. “The Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” said Air Force spokesperson Ann Stefanek, adding that the colonel’s comments appeared to have been taken out of context.
Hamilton subsequently revised his original story, saying it had been a “thought experiment” rather than a real simulation. “Although this is a hypothetical example, it illustrates real problems with AI capabilities,” the colonel added.
In Ropek’s view, Hamilton’s story was likely embellished as it spread through the media, eventually being presented as if it described a real situation. The journalist did, however, allow for another possibility.
“The alternative version suggests that this really happened, and perhaps the US government now doesn’t want everyone to know that it is one step away from unleashing Skynet on the world. But there is no real proof of this yet,” he concluded.
One of the earliest versions of Hamilton’s story was a quotation in the British newspaper The Guardian, drawn from the US Air Force colonel’s remarks at a defense conference in London. According to him, the artificial intelligence inside the drone saw the human operator as a threat to the mission and eliminated him in virtual reality.
The article’s headline was later changed to “US Air Force Denies Running Simulation in Which AI Drone ‘Kills’ Operator”.