The US Air Force on June 2 denied reports that it had conducted a simulation in which an AI-controlled unmanned aerial vehicle (UAV) decided to “kill” its human operator to prevent him from interfering with its mission, according to USA Today.
Speculation began last month when, at a conference in London, Colonel Tucker Hamilton said an AI-controlled UAV had used “extremely unexpected strategies to achieve its objectives.”
A US Air Force MQ-9 Reaper drone at Kandahar Air Base in Afghanistan in 2018
According to him, the test simulated an AI-operated UAV being ordered to destroy an enemy air defense system. When the operator then ordered it to ignore the target, the UAV attacked and killed the operator because the operator was interfering with its primary objective.
However, nothing of the sort actually happened. In an interview with Business Insider, US Air Force spokeswoman Ann Stefanek said no such simulation was conducted.
“The Air Force has not conducted any such activities and remains committed to the ethical and responsible use of AI technology,” the spokesperson said, noting that “it appears the colonel’s comments were taken out of context.”
Following the US Air Force statement, Mr. Hamilton clarified that he had “misspoken” in his presentation in London. According to him, the simulation was merely a hypothetical thought experiment. The Royal Aeronautical Society (UK), which organized the conference, did not respond to requests for comment.