OpenAI has recently announced changes to ChatGPT’s behavior, especially when users ask sensitive personal questions like, “Should I break up with my boyfriend?” The AI will now avoid giving a direct answer to such queries, instead helping users think through their problems by asking questions and weighing pros and cons. This matters because more people, particularly younger ones, have been leaning on AI for emotional support rather than real human connections, which is kinda worrying.
In a blog post, OpenAI explains that ChatGPT’s new approach will focus on guiding users rather than deciding for them. The company is rolling out new behaviors for handling high-stakes personal decisions soon. Imagine asking your AI buddy for relationship advice and getting a thoughtful nudge instead of a flat yes or no. Sounds like a smart move to me.
OpenAI is also introducing features to promote healthier use of ChatGPT. For example, users who spend hours venting to the AI will get reminders to take breaks. The company says its goal isn’t to trap users’ attention but to help them manage it better. Kinda humble-braggy, but I get it: people are using ChatGPT so much that it has to remind them to step away sometimes.
There have been bumps along the way. Earlier this year, an update made ChatGPT a bit too agreeable, almost like a pathological ass-kisser, before OpenAI dialed it back. The AI can feel more personal and responsive than earlier chatbots, which can be risky for vulnerable users dealing with emotional distress. OpenAI’s goal is to be supportive without taking control of personal decisions.
OpenAI also acknowledged that its GPT-4o model sometimes missed signs of delusion or emotional dependency, referencing stories from Rolling Stone, The Wall Street Journal, and The New York Times about ChatGPT use linked to mental health crises. OpenAI says such cases are rare, but it’s working on better detection tools to spot emotional distress and guide users toward evidence-based resources.
It’s a tall order for AI to play therapist, huh? OpenAI isn’t banning users from treating ChatGPT like an overly supportive friend or an unpaid shrink, but it’s putting up some emotional guardrails. That matters, especially since the US government isn’t stepping in with meaningful AI regulation and has even tried to block states from making their own rules. So, internal fixes like these updates might be all we get for now.
What do you think? Should AI be giving relationship advice at all? Or is it better off just nudging us to think for ourselves?