In a 20-page document published on November 26, 18 countries agreed that companies designing and deploying AI must develop the technology in a way that keeps customers and the wider public safe from misuse, according to Reuters.
The agreement is non-binding and mainly makes general recommendations, such as monitoring AI systems to detect and prevent abuse, protecting data and vetting software vendors.
The rapid development of AI has set off a technological race and raised widespread concerns.
Still, Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries endorsed the view that AI systems need to put safety first.
“This is the first time we’ve seen countries agree that these capabilities are not just about the attractiveness of the features, how quickly we can bring them to market, or how we can compete to reduce costs,” Easterly told Reuters. The official said the guidance represents “an agreement that the most important thing to ensure at the design stage is security.”
The agreement is the latest in a series of initiatives by governments around the world to shape the development of AI, whose impact is increasingly felt across industries and society at large.
The document addresses questions about how to protect AI systems from hacker attacks and includes recommendations such as only releasing new models after thorough security testing. The new guidance does not address thorny questions around the appropriate use of AI or how to collect the data that feeds these models.
The rise of AI has raised many concerns, including fears that AI could be used to disrupt the democratic process, promote fraud or lead to massive unemployment, among other harms.
Europe is ahead of the US in enacting AI regulations. France, Germany and Italy also recently reached an agreement on how to regulate the foundation models that underpin AI applications.
The Biden administration has pressed lawmakers to regulate AI, but a deeply polarized Congress has made little progress toward passing effective legislation.
The White House sought to mitigate risks from AI for consumers, workers and minorities, while bolstering national security, with a new executive order in October.