Instagram and Facebook to Notify Users in India About Digitally Altered Image and Video Ads, Targeting Deepfakes

In the aftermath of the widespread coverage surrounding the Katrina Kaif and Rashmika Mandanna deepfake incidents, Meta has introduced a new policy aimed at combating deceptive deepfake content. For now, however, the policy applies primarily to advertisers running digitally manipulated or altered posts related to political and social issues.

Deepfakes have become a prominent and concerning topic of discussion, particularly in India. The conversation gained momentum after a viral fake video of Rashmika Mandanna, followed shortly by one featuring Katrina Kaif. The issue extends well beyond India and beyond movie stars: it poses risks for political and social discourse, especially with major democracies such as India, the US, and the UK heading into general elections in 2024. In response, Meta has unveiled a policy requiring advertisers on Instagram and Facebook to disclose when ads about social issues, elections, or politics contain convincingly realistic images or videos, or realistic-sounding audio, that have been digitally generated or altered.

For those unfamiliar, deepfake content uses artificial intelligence to create or manipulate videos or images, convincingly replacing one person's likeness with another's. Unlike conventionally edited or "photoshopped" content, deepfakes rely on machine learning to produce material realistic enough to spread false information or fabricate scenarios that never happened, which is why their impact on the authenticity of visual and audio content online is hard to overstate.
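For readers curious about the mechanics, the classic face-swap approach popularized by open-source deepfake tools pairs one shared encoder with a separate decoder per identity. The PyTorch sketch below is a deliberately simplified toy with illustrative layer sizes and made-up names, not any specific tool's implementation, but it shows where the "swap" actually happens: person A's encoded face is decoded by person B's decoder.

import torch
import torch.nn as nn

# Illustrative size of the shared face embedding (an assumption, not a standard)
LATENT = 128

class Encoder(nn.Module):
    # One encoder is trained on face crops of BOTH people, so it learns
    # a shared representation of pose, lighting, and expression.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, LATENT),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder per identity: each learns to reconstruct only
    # its own person's face from the shared embedding.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face crops
decoder_b = Decoder()  # would be trained only on person B's face crops

# After training, a 64x64 crop of person A (random tensor stands in here)...
face_a = torch.rand(1, 3, 64, 64)
# ...run through person B's decoder yields B's likeness wearing A's pose
# and expression: the "swap".
swapped = decoder_b(encoder(face_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])

In real systems both decoders are trained against the same encoder on thousands of aligned face crops per person, which is why the results can look so convincing.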

To curb the spread of misinformation, Meta's new policy requires that users be notified whenever such an ad contains digitally altered or generated content, with the disclosure displayed on the ad itself. Meta reserves the right to reject non-compliant ads and may penalize advertisers who repeatedly fail to meet the disclosure requirement.

The policy takes effect in January 2024 and applies globally. Meta clarifies that advertisers need not disclose minor, inconsequential alterations such as resizing, cropping, color correction, or image sharpening. Beyond advertisers, Meta already enforces rules for all users regarding deepfake videos on Instagram and Facebook: violations, such as presenting manipulated media as fact, are subject to platform penalties. The new policy essentially extends those existing guidelines, officially holding advertisers accountable for deepfake content violations.

Google announced a comparable policy in September, requiring advertisers to inform users when an image or audio clip in an ad has been created using artificial intelligence; that policy came into effect in November 2023.
