Written by Shubham Arora | Published: Mar 11, 2026, 08:11 PM (IST)
YouTube is expanding access to its AI-based Likeness Detection tool.
YouTube is expanding access to its AI-based deepfake detection system, bringing the feature to a new group of users that includes government officials and journalists. The tool, known as “Likeness Detection,” was initially rolled out to creators on the platform last year and is designed to help people identify and report AI-generated videos that imitate their face or voice without permission.
YouTube announced the change in a blog post, saying the next phase of the rollout will begin with a pilot group of civic leaders, political figures, and journalists.
Likeness Detection works much like YouTube’s Content ID system. Instead of tracking copyrighted content, however, it looks for AI-generated videos that appear to copy someone’s face or voice. If a match is detected, the person can review the video and request its removal if it breaks YouTube’s privacy rules.
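The flow described above (detect a likeness match, let the affected person review it, then request removal) can be sketched in a few lines. This is purely illustrative: YouTube has not published its implementation, and every name and threshold below is a hypothetical assumption, not a real YouTube API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the detect -> review -> removal-request flow.
# All names and values here are invented for illustration.

@dataclass
class LikenessMatch:
    video_id: str
    confidence: float  # assumed similarity score from a detection model

def triage(match: LikenessMatch, threshold: float = 0.9) -> str:
    """Route a detected match: surface it for human review only above a threshold."""
    if match.confidence >= threshold:
        return "flag_for_review"  # the affected person can then request removal
    return "ignore"

print(triage(LikenessMatch("abc123", 0.95)))  # flag_for_review
```

The key design point the article implies is that detection alone does not remove a video; a person reviews each flagged match and decides whether to file a removal request under the privacy rules.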
YouTube says the feature is meant to address the growing availability of AI tools that can create realistic deepfakes. Journalists, government officials, and other public figures are frequent targets of such videos, which can sow confusion or fuel misinformation online.
The tool debuted in October 2025 for creators in the YouTube Partner Program. The company is now expanding the pilot to include journalists, political candidates, and government officials.
However, access will still be limited at first. YouTube said the initial rollout will focus on a smaller group to test how the system works in real-world scenarios. The company said it plans to gradually expand access to more users once the pilot phase is complete.
Anyone who wants to use the tool will need to complete a verification process. This involves submitting a photo ID and recording a short video of their face. After that, YouTube reviews the details before granting access to the feature.
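The verification steps described above (submit a photo ID and a short face video, then wait for YouTube's review) can be summarized as a simple pipeline. This sketch is an assumption for illustration only; the function and state names are invented and do not correspond to any real YouTube interface.

```python
# Hypothetical sketch of the verification steps the article describes.
# All identifiers are invented; this is not YouTube's actual process.

def verify_applicant(has_photo_id: bool, has_face_video: bool) -> str:
    """Return the next state in the (assumed) verification pipeline."""
    if not (has_photo_id and has_face_video):
        return "incomplete"      # both items must be submitted first
    return "pending_review"      # YouTube reviews details before granting access
```

As the article notes, access is granted only after YouTube's review step, so completing the submission alone does not enable the feature.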
YouTube says this process is meant to ensure that only verified individuals can monitor and report deepfakes that use their likeness.
The company also stated that the data collected during verification will only be used to confirm identity and support the feature. According to YouTube’s blog post, the information will not be used to train Google’s generative AI models and will be handled according to the company’s privacy policies.