YouTube now lets you take down AI content that mimics your face and voice
- YouTube has expanded its privacy request process to include AI-generated content.
- If a piece of content mimics your face or voice, you can request its removal from YouTube.
- YouTube will review each request manually and issue takedowns when faked content “could be mistaken for real.”
Over the past year, we’ve seen how tools like Midjourney can create misleading imagery and potentially sway public opinion. To address these concerns, YouTube is taking a stand with new measures to safeguard user privacy. The platform will now allow users to flag and request the removal of AI-generated content that mimics their face or voice. The policy covers both fully synthetic recreations and partially altered content that could be mistaken for the real deal.
If you come across content that convincingly fakes your voice or face, YouTube has added a third option to its privacy complaint form that covers this scenario. Before this, you could only report videos that included your full name or sensitive information, like a residential address, without your consent.
It’s worth noting that YouTube will review each request carefully before taking action. A key factor will be the level of realism and the potential for misuse or manipulation.
The announcement also specifies that YouTube will consider whether the reported content contains “parody or satire when it involves well-known figures.” The platform could make an exception for high-profile individuals who are already in public discourse. This seems like a reasonable trade-off as long as the AI-generated content falls under the purview of social commentary and free speech. However, it’ll be interesting to see how YouTube balances that consideration against a person’s reputational risk.
In March, YouTube began enforcing the use of disclosure labels for AI-generated content. When the label is applied, viewers see a small message that reads “Altered or synthetic content” along the bottom of the video, similar to a sponsorship disclosure. This only applies when videos contain a significant amount of artificially generated content, like if an AI voice generator is used for narration.
The latest announcement is yet another step to weed out potential misuse of AI on YouTube, which may become more rampant as video generators grow more capable. OpenAI’s demos of Sora earlier this year were exceptionally lifelike, and its upcoming GPT-5 model will likely support video as an additional modality on top of text, images, and audio. Google also announced its competing Veo video generator last month and plans to integrate it into YouTube Shorts.