OpenAI has added an age prediction system to ChatGPT that can estimate a user’s age from account activity and use that estimate to decide whether to prompt for formal ID verification. The company describes the system in a ChatGPT help article, saying it “looks at different signals linked to your account. For example, it may look at general topics you talk about or the times of day you use ChatGPT.” The article notes the system is not perfect and may make mistakes.
If ChatGPT verifies that an account belongs to someone 18 or older, the account can bypass certain safety restrictions that would otherwise apply. The help article lists content that verification can affect, including:
- Graphic violence or gore
- Viral challenges that could push risky or harmful behavior
- Sexual, romantic, or violent role play
- Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
OpenAI says accounts still need a verification step even if a correct birthday was entered when the account was created. If the system cannot confidently predict age from existing usage, users will be offered a third-party verification option through Persona. OpenAI says it “does not see what you share with Persona. Persona deletes it after verification. OpenAI only learns that you verified you’re 18 or older and information about your age (for example, a date of birth), not the ID itself.”
Persona may request a government ID and a live selfie using a webcam or phone camera. Failing verification does not block access to ChatGPT, but unverified accounts will remain subject to the described safety measures.
OpenAI also calls out regional rules. The help article specifies that in Italy users must finish verification within 60 days of being prompted or certain features will be disabled. The article does not list which features those are.
The new ChatGPT age prediction follows a similar approach announced by Discord this week, which said it will use past behavior to confirm age groups and only require an ID or live photo if automated checks fail. That rollout comes amid ongoing concerns about how verification data is handled. A recent report detailed a breach of a third-party support contractor that may have exposed some government ID images submitted for Discord's age checks, underscoring the risk platforms take on whenever they collect ID material during verification.
OpenAI's help article and Persona's statements remain the primary sources for details about the system's signals, the verification fallback, and how the data is handled.