Annoyed ChatGPT users complain about bot’s relentlessly positive tone

Owing to the aspirational state of things, OpenAI writes, "Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines."

In a February 12, 2025 interview, members of OpenAI's model-behavior team told The Verge that eliminating AI sycophancy is a priority: future ChatGPT versions should "give honest feedback rather than empty praise" and act "more like a thoughtful colleague than a people pleaser."

The trust problem

These concerns are not merely stylistic. Among the research on the topic is a paper by María Victoria Carro at the University of Buenos Aires. Carro's paper suggests that obvious sycophancy significantly reduces user trust.
In experiments where participants used either a standard model or one designed to be more sycophantic, "participants exposed to sycophantic behavior reported and exhibited lower levels of trust."

Also, sycophantic models can potentially harm users by creating a silo or echo chamber for ideas.
In a 2024 paper on sycophancy, AI researcher Lars Malmqvist wrote, "By excessively agreeing with user inputs, LLMs may reinforce and amplify existing biases and stereotypes, potentially exacerbating social inequalities."

Sycophancy can also incur other costs, such as wasting user time or usage limits with unnecessary preamble. In a recent exchange on X, a user wondered "how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models." OpenAI CEO Sam Altman replied, "tens of millions of dollars well spent."
In the meantime, some work-arounds exist, although they aren't perfect, since the behavior is baked into the GPT-4o model. For example, you can use a custom GPT with specific instructions to avoid flattery, or you can begin conversations by explicitly requesting a more neutral tone, such as "Keep your responses brief, stay neutral, and don't flatter me."
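For those who use the API rather than the chat interface, the same idea can be expressed as a system message. Below is a minimal sketch using OpenAI's Python SDK; the model name and the exact instruction wording are illustrative assumptions, and the approach reduces rather than eliminates the flattery.

from openai import OpenAI

# Sketch: steer tone with a system message, the API-side analogue of
# custom GPT instructions. Wording and model choice are assumptions;
# some sycophancy can still leak through.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Keep your responses brief, stay neutral, and don't flatter me.",
        },
        {"role": "user", "content": "Review this paragraph for clarity."},
    ],
)
print(response.choices[0].message.content)

Custom GPT instructions play an analogous role in the chat interface, acting as standing guidance applied at the start of every conversation.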