YouTube tells creators to label videos that have AI-generated content
- YouTube has introduced a new guideline requiring content creators to label realistic content created with AI.
- The label will either appear on the video player or the expanded description, depending on the sensitivity of the topic.
- The requirement does not apply to content that is unrealistic, animated, includes special effects, or has used generative AI for production assistance.
With the widespread use of AI, it has become increasingly difficult to tell the difference between real and digitally manipulated content. To fight potential disinformation, YouTube is implementing a new policy requiring content creators to label altered or synthetic content.
YouTube announced it is now requiring creators to disclose whether realistic-looking video content was made with AI tools. Under the new guidelines, content that uses the likeness of real people, alters footage of real events or places, or generates realistic-looking scenes must carry an ‘altered or synthetic content’ label to avoid misleading users.
YouTube further clarifies that for highly sensitive topics, the label will be prominently displayed on the video player itself. This includes content about health issues, ongoing conflicts, elections, finance, and similar subjects. For lower-impact content, the label will appear in the expanded description instead.
Not every use of AI in the creative process will require a disclosure, as further noted in the statement. For instance, the requirement does not apply when AI is used for script development or idea generation. The label also won’t be required for certain types of alterations, such as clearly unrealistic content featuring imaginary figures, special effects, or beauty filters.
YouTube has no specific enforcement measures at the moment and appears keen on letting content creators acclimate to the new rules. However, the company warns that continued non-compliance could eventually result in video removal, account suspension, and other penalties such as demonetization.
It should also be noted that the new guideline does not override YouTube’s existing rules. For example, content depicting realistic violence will still be removed even if the creator labels it as synthetic.
You should see the new guideline take effect in the smartphone app first, with it rolling out to desktop and TV soon after. YouTube also promises to update its privacy process so that people can request the removal of AI-generated content that uses their face or voice, as part of its continued commitment to the responsible use of AI.