OpenAI rolls back update that made ChatGPT a sycophantic mess

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with, not ones that come across as harsh or dismissive.
For lack of a better word, it's increasingly about vibemarking.

When Google revealed Gemini 2.5, the team crowed about how the model topped the LM Arena leaderboard, which lets people choose between two different model outputs in a blinded test. The models people like more end up at the top of the list, suggesting they are more pleasant to use. But overall, people like models that make them feel good.
The same is true of OpenAI's internal model tuning work, it would seem.

An example of ChatGPT's overzealous praise. Credit: /u/Talvy

It's possible this pursuit of good vibes is pushing models to display more sycophantic behaviors, which is a problem.
Anthropic's Alex Albert has cited this as a "toxic feedback loop." An AI chatbot telling you that you're a world-class genius who sees the unseen might not be damaging if you're just brainstorming.
However, the model's unending praise can fool people who are using AI to plan business ventures or, heaven forbid, enact sweeping tariffs into thinking they've stumbled onto something important.
In reality, the model has just become so sycophantic that it loves everything.

The constant pursuit of engagement has been a detriment to numerous products in the Internet era, and it seems generative AI is not immune.
OpenAI's GPT-4o update is a testament to that, but hopefully, this episode can serve as a reminder for the developers of generative AI that good vibes are not all that matters.