By Shubham Arora | Published: Jan 29, 2026, 05:08 PM (IST)
OpenAI may be exploring a new kind of social network, one that is built around a simple idea: making sure real people are talking to real people. According to Forbes, the company is working on an early version of a social platform that aims to reduce bot activity by verifying that each account belongs to a human.
People familiar with the project told Forbes that the idea is still at a very early stage. The team working on it is said to be small, with fewer than ten people involved, and there is no confirmed launch timeline yet. The focus, at least for now, is not on building another large content platform, but on addressing the growing problem of automated accounts that imitate human behaviour online.
The project comes at a time when many social platforms are struggling with spam replies, fake engagement, and AI-generated content that is difficult to separate from real users.
What sets OpenAI’s approach apart is how it may handle identity checks. Sources told Forbes that the company has discussed requiring users to prove they are human using biometric methods. These could include Apple’s Face ID or the World Orb, a device that scans a person’s iris to create a unique digital identity.
The World Orb is operated by Tools for Humanity, a company founded and chaired by OpenAI CEO Sam Altman. Such a system would make it harder for bots to create and manage large numbers of fake accounts. At the same time, privacy concerns have been raised internally, as biometric data cannot be changed if it is compromised.
Bot activity has long been an issue across social platforms, but it has become more visible in recent years. On X (formerly Twitter), automated replies and spam accounts have continued despite repeated cleanup efforts. Altman has publicly expressed frustration with this trend, posting that discussions around AI now often feel artificial.
The Verge previously reported that OpenAI was working on a social network, which lends weight to the claims now detailed by Forbes.
While details remain limited, sources say users may be able to use AI tools to create content such as images or videos on the platform. This would place OpenAI in the same space as apps like Instagram and TikTok, which are already adding AI-based creation features.
OpenAI has not commented publicly on the project. There is also no guarantee the social network will ever launch. For now, it appears to be an internal experiment aimed at testing whether a more tightly verified, bot-resistant social space is possible.