Nick Clegg, president of global affairs at Meta, said the company will detect AI-generated images using a set of invisible markers built into the files. Meta will apply labels to such content posted to its Facebook, Instagram, and Threads services, signaling to users that images that may appear to be real photos are actually digital creations produced by artificial intelligence. The company already labels content created with its own AI tools, according to Reuters.
Once the new system is up and running, Meta will do the same for images generated on services from OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet. The announcement offers an early glimpse of an emerging standard that tech companies are developing to mitigate the harms of generative AI, which can create convincing fake content from even simple prompts.
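The article does not detail how the markers work, but industry labeling standards such as IPTC photo metadata embed a machine-readable "digital source type" value inside the image file itself. The sketch below is a simplified illustration, not Meta's actual detector: it assumes the marker lives in an embedded XMP metadata packet and scans the raw bytes for the real IPTC "trained algorithmic media" URI. The function name and scanning approach are hypothetical.

```python
# Hedged sketch: check a file's raw bytes for the IPTC DigitalSourceType
# marker that generative-AI tools can embed in image metadata.
# The URI below is the actual IPTC NewsCodes value for AI-generated media;
# the byte-scan detection strategy is an illustrative simplification.

AI_SOURCE_MARKER = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def looks_ai_generated(file_bytes: bytes) -> bool:
    """Return True if the embedded metadata carries the IPTC
    'trained algorithmic media' digital-source-type marker."""
    return AI_SOURCE_MARKER in file_bytes


# Synthetic XMP packet standing in for real image metadata
# (a real generator would write this into the image's XMP segment):
fake_xmp = (
    b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    b"<Iptc4xmpExt:DigitalSourceType>" + AI_SOURCE_MARKER +
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)

print(looks_ai_generated(fake_xmp))   # True
print(looks_ai_generated(b"\xff\xd8 ordinary camera jpeg"))  # False
```

A production system would parse the XMP packet properly and also check cryptographically signed provenance data (as in the C2PA specification) rather than trusting a raw substring match, since plain metadata is trivial to strip or forge.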
AI-generated image labeling will help curb misinformation and scams
The approach builds on a pattern established over the past decade, in which companies coordinate to remove banned content, such as depictions of mass violence and child exploitation, across their platforms.
Clegg believes companies can reliably label AI-generated images at this point, while noting that more sophisticated audio and video content labeling tools are still in development.
Meta will begin requiring users to label altered audio and video content and will impose penalties if they don't, but Clegg said there's currently no viable mechanism for labeling text generated by AI tools like ChatGPT.
Meta's independent oversight board has criticized the company's policy on misleadingly edited videos, saying such content should be labeled rather than removed. Clegg said the new move could help Meta classify that content more effectively.