Written by Shubham Arora | Published: Feb 26, 2026, 10:33 PM (IST)
Instagram is introducing alerts that notify parents when teens repeatedly search for suicide or self-harm-related terms. (Image: Unsplash)
Instagram is rolling out a new feature that will alert parents if their teen repeatedly searches for terms linked to suicide or self-harm. The update will apply to families using Instagram’s parental supervision tools and Teen Accounts setup.
The move marks a shift in how Meta handles sensitive searches on Instagram. Until now, the platform would block those search results and guide teens toward support resources instead of showing harmful content. With the new change, parents will be notified directly if certain search patterns are detected.
Alerts will be triggered if a teen repeatedly searches for terms clearly associated with suicide or self-harm within a short period. The notifications will only be sent to parents who are enrolled in Instagram’s supervision program.
Parents may receive alerts through email, text message, WhatsApp, or through their own Instagram account, depending on the contact information available. Meta said the goal is to “err on the side of caution,” which means some alerts may be sent even if there is no immediate risk.
Instagram already blocks suicide and self-harm content from appearing in teen search results and instead directs users to helplines and support pages. The new system adds a layer of parental awareness rather than replacing existing safeguards.
The announcement has drawn mixed reactions. The Molly Rose Foundation, set up by the family of Molly Russell after her death in 2017, criticised the move. According to the BBC, the charity said forced notifications could cause panic and leave parents unprepared for difficult conversations.
On the other hand, suicide prevention charity Papyrus said it welcomed the step but argued that stronger measures are needed to prevent harmful content from reaching young users in the first place.
The change comes at a time when social media platforms are under growing scrutiny over child safety. Meta is currently facing legal challenges in the US related to harms to minors. Governments in several countries are also reviewing stricter rules for young users online.
Meta said it is also working on similar alerts for cases where teens discuss suicide or self-harm topics with Instagram’s AI tools. More details on that feature are expected in the coming months.