Meta Puts AI-Generated Content on Blast
February 08, 2024
2 min read
We're all starting to get a little worried about AI's ability to do things with text, images, and video in minutes that used to take us mere mortals hours, days, or weeks. Humans being what they are, many have taken advantage of this new technology with less-than-honorable intentions, including creating deepfakes aimed at hurting others, as in the recent Taylor Swift incident on X.
That's only one of many incidents being called into question, and now Meta is taking steps to put AI-generated content on blast with watermarking.
Meta says it's working with industry partners on "common technical standards for identifying AI content, including video and audio," with the goal of being able, in the next several months, to identify and label user-posted images on Facebook, Instagram, and Threads using industry-standard indicators that spot AI-generated content a mile away.
Coming from a company that has embraced AI, many people question how much Meta really cares about flagging it. However, the company says it has "labeled photorealistic images created using Meta AI since it launched so that people know they are 'Imagined with AI.'"
How do you like the careful wording of "Imagined with AI"?
As it becomes harder to differentiate between AI and human content, folks want to be warned they may be looking at something created by AI. According to Meta, "People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology."
Meta says it's crucial to help people understand when photorealistic content is created using AI. The company wants the ability to apply the same labeling it uses for its own AI content to AI content created by other companies.
As for timing, Meta says it will roll this out through the next year, pointing out that there will be many important elections around the world during that period. "During this time, we expect to learn much more about how people create and share AI content, what sort of transparency people find most valuable, and how these technologies evolve."
Meta says it will add tools that let users tell the public when they share AI-generated video or audio so it can be labeled as such. The company will also require users to use the "disclosure and label tool" when posting organic content with "photorealistic video or realistic-sounding audio that was digitally created or altered." Meta might even apply penalties to users who fail to do so.
Regarding what Meta deems high-risk material: "As a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."
There's still much to be determined concerning how Meta will decide what is appropriate, how the watermark (which the company says will be invisible) will be used, and when it will be fully functional. Still, if you post AI-generated content on any of Meta's platforms, you are going to be called out for it.