YouTube’s AI tool to detect deepfakes now available to journalists and government officials
YouTube is expanding its AI deepfake detection system, Likeness Detection, to journalists and government officials to help identify and report impersonation videos.
Published By: Shubham Arora | Published: Mar 11, 2026, 08:11 PM (IST)
YouTube is expanding access to its AI-based deepfake detection system, bringing the feature to a new group of users that includes government officials and journalists. The tool, known as "Likeness Detection," was initially rolled out to creators on the platform last year and is designed to help people identify and report AI-generated videos that imitate their face or voice without permission.
The announcement was shared in a blog post by YouTube, which said the next phase of the rollout will begin with a pilot group of civic leaders, political figures, and journalists.
What the tool is meant to do
Likeness Detection works much like YouTube's Content ID system, but instead of tracking copyrighted content, it scans for AI-generated videos that appear to copy someone's face or voice. If a match is found, the person can review the video and request its removal if it violates YouTube's privacy rules.
YouTube says the feature is designed to address the growing availability of AI systems that can produce realistic deepfakes. Journalists, government officials, and other public figures are frequent targets of such videos, which can sow confusion and spread misinformation online.
Pilot rollout for civic leaders and journalists
The tool was first introduced in October 2025 and was initially rolled out to creators who are part of the YouTube Partner Program. The company is now expanding the pilot to include journalists, political candidates, and government officials.
However, access will be limited at first. YouTube said the initial rollout will focus on a small group to test how the system performs in real-world scenarios, with access gradually expanding to more users once the pilot phase is complete.
Verification process required
Anyone who wants to use the tool will need to complete a verification process. This involves submitting a photo ID and recording a short video of their face. After that, YouTube reviews the details before granting access to the feature.
YouTube says this process is meant to ensure that only verified individuals can monitor and report deepfakes that use their likeness.
The company also stated that the data collected during verification will only be used to confirm identity and support the feature. According to YouTube's blog post, the information will not be used to train Google's generative AI models and will be handled according to the company's privacy policies.