Key Points:
- YouTube is developing new AI tools to protect creators and artists from deepfakes.
- The platform’s “likeness management technology” will help safeguard creators’ identities.
- This includes technology to detect AI-generated content that uses creators’ faces and singing voices.
YouTube is rolling out advanced AI technology to help creators and artists detect and manage unauthorized use of their faces and voices in deepfake content.
In a recent blog post, YouTube announced it is developing new tools as part of its broader effort to protect creators’ identities and likenesses.
The platform’s “likeness management technology” is designed to help creators, actors, musicians, and other public figures identify AI-generated content on YouTube that depicts their faces, so they can spot deepfakes and request removal of unauthorized content.
The move follows YouTube’s July policy update, which introduced new measures allowing users to request the removal of AI-generated content that mimics their voices or faces. YouTube reiterated that misuse of creator content violates its Terms of Service, and that all content on the platform must adhere to its Community Guidelines.
In addition to these protective measures, YouTube highlighted its generative AI tools, such as Dream Screen for Shorts, which have built-in safeguards to prevent misuse of the technology.
YouTube’s Expanding Efforts Against Deepfakes
YouTube is also enhancing its Content ID system with synthetic-singing detection technology, which will allow artists to identify, manage, and remove unauthorized AI-generated content that imitates their vocals. A pilot program for this feature is planned for early next year.
Although there’s no specific timeline yet for the rollout of the deepfake detection tools, YouTube’s focus on protecting creators marks a proactive step toward addressing the growing concern over deepfakes.