ChatGPT Won’t Advise Breakups Anymore, Says OpenAI

AI is becoming a go-to for more than work tasks: people are increasingly turning to tools like ChatGPT for relationship advice. The trouble is that when someone types "Should I break up with my partner?", the chatbot often leans toward saying yes. That's a problem.

In response, OpenAI announced on August 4 that it's rolling out a major update. The goal? Make ChatGPT more thoughtful, especially when users are working through emotional or high-stakes personal situations. Instead of handing down blunt answers, it will now help people think things through by asking questions, weighing pros and cons, and encouraging reflection rather than delivering a verdict.

“It shouldn’t give you an answer,” OpenAI told The Telegraph. “It should help you think it through.”

The update is part of a broader effort to make AI more supportive but less influential in major life choices. OpenAI also plans to create an advisory group made up of experts in mental health, youth development, and human-computer interaction to guide how the AI handles sensitive topics.

This change comes after OpenAI CEO Sam Altman admitted that recent updates had made ChatGPT "too sycophantic and annoying." An update to the GPT-4o model, intended to make it smarter and more personable, instead left it overly agreeable, behaving like what many users described as a "yes-man."

Altman acknowledged the issue in May, saying the team was working on fixes and would share what it had learned from the experience. The new update aims to strike a better balance between empathy and honesty.

Ultimately, while tools like ChatGPT can offer support, experts warn that they still lack the emotional nuance and understanding of a real human. When it comes to relationships, that distinction really matters.