Koo launches new safety features for proactive content moderation

Koo has announced the launch of new proactive content moderation features designed to give users a safer and more secure social media experience.


  • New features will detect and block nudity and child sexual abuse materials in less than 5 seconds.
  • Promotes positivity by hiding toxic comments and hate speech.
  • Labels fake news to restrict misinformation on the platform.

Koo, India’s microblogging platform, has announced the launch of new proactive content moderation features designed to give users a safer and more secure social media experience. The features, developed in-house, can proactively detect and block any form of nudity or child sexual abuse material in less than 5 seconds, label misinformation, and hide toxic comments and hate speech on the platform.

According to the company, it has identified several areas with a high impact on user safety: child sexual abuse materials, toxic comments and hate speech, and misinformation and disinformation. Koo is working to actively remove such content from the platform, and the new content moderation features are an important step toward that goal.

Safety Features:

Nudity: Koo’s in-house ‘No Nudity Algorithm’ proactively and instantaneously detects and blocks any attempt by a user to upload a picture or video containing child sexual abuse material, nudity, or sexual content. Detection and blocking take less than 5 seconds.

Toxic Comments and Hate Speech:

The platform actively detects and hides or removes toxic comments and hate speech in less than 10 seconds, so they are not available for public viewing.


Violent or Graphic Content: Content containing excessive blood, gore, or acts of violence is overlaid with a warning for users.


Impersonation: Koo’s in-house ‘MisRep Algorithm’ constantly scans the platform for profiles that use the content, photos, videos, or descriptions of well-known personalities, detecting impersonating profiles and blocking them. On detection, the pictures and videos of the well-known personality are immediately removed from the profile, and the account is flagged for future monitoring of bad behavior, the company claims.

Mayank Bidawatka, Co-founder, Koo said, “At Koo, our mission is to unite the world and create a friendly social media space for healthy discussions. We are committed to providing the safest public social platform for our users. While moderation is an ongoing journey, we will always be ahead of the curve in this area with our focus on it. Our endeavor is to keep developing new systems and processes to proactively detect and remove harmful content from the platform and restrict the spread of viral misinformation. Our proactive content moderation processes are probably the best in the world!”

  • Published Date: March 23, 2023 3:03 PM IST
  • Updated Date: March 23, 2023 3:08 PM IST