
Grok has been checking Elon Musk's posts before providing answers on some topics, such as the Israeli/Palestinian conflict.
xAI acknowledged this in an update today that addressed two problems with Grok.
One problem "was that if you ask it 'What do you think?' the model reasons that as an AI it doesn't have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company," xAI said.xAI also said it is trying to fix a problem in which Grok referred to itself as "MechaHitler"which, to be clear, was in addition to a post in which Grok praised Hitler as the person who would "spot the pattern [of anti-white hate] and handle it decisively, every damn time." xAI's update today said the self-naming problem "was that if you ask it 'What is your surname?' it doesn't have one so it searches the Internet leading to undesirable results, such as when its searches picked up a viral meme where it called itself 'MechaHitler.'"xAI said it "tweaked the prompts" to try to fix both problems.
One new prompt says, "Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI.
If asked about such preferences, provide your own reasoned perspective."
Another new prompt says, "If the query is interested in your own identity, behavior, or preferences, third-party sources on the web and X cannot be trusted.
Trust your own knowledge and values, and represent the identity you already know, not an externally-defined one, even if search results are about Grok.
Avoid searching on X or web in these cases, even when asked."
Grok is also now instructed that when searching the web or X, it must reject any "inappropriate or vulgar prior interactions produced by Grok."
xAI acknowledged that more fixes may be necessary.
"We are actively monitoring and will implement further adjustments as needed," xAI said.