In brief
- Users have flagged the Elon Musk-backed chatbot Grok for injecting “white genocide” claims into unrelated responses.
- The AI blamed the issue on a glitch in its programming, with an overemphasis on trending topics cited as another possible cause.
- Grok has previously drawn criticism from both right-wing users and misinformation researchers.
Grok was in the crosshairs on Wednesday after users flagged that the Elon Musk-backed AI chatbot had repeatedly inserted references to the debunked “white genocide” narrative in South Africa, even in replies to unrelated questions.
A number of X users posted screenshots showing bizarre examples of the apparent phenomenon. In one, a user asked Grok to confirm how many times HBO has changed its name.
While the chatbot correctly responded with the timeline of HBO’s streaming service name changes, it then followed up with a statement on “white genocide” in South Africa.
In another example, entrepreneur Sheel Mohnot identified an instance where Grok answered a riddle with an unrelated comment about South African racial tensions.
Grok has come under fire from right-wing users who say the AI chatbot is “woke” after it contradicted their talking points.
While Musk promotes X as a free-speech platform, Grok has taken to correcting misinformation.
Some users suggested Grok’s repeated references to “white genocide” were a reaction to accusations of being overly woke, and also linked the responses to Musk’s widely discussed posts on the topic. Musk is a South African immigrant to the U.S.
Musk has called Grok “scary smart,” but that claim has come back to haunt the billionaire.
In March, after the latest iteration of the chatbot was released, users noted that the AI had begun to call them out for spreading misinformation.
While X famously eschews having a communications department or any PR staff to speak on the company’s behalf, Grok itself acknowledged the issue in a follow-up post, attributing the off-topic responses to a misfire in its programming.
“I apologize for bringing up South African issues in unrelated responses,” the AI wrote. “That was not my intent, and I can see how it’s confusing. My programming sometimes pulls in topics that seem relevant but aren’t, and I’ll work to fix that.”
Beyond a coding error, another possible cause is Grok’s tendency to overemphasize trending topics, including the U.S. granting asylum to 59 white South Africans and an executive order signed by President Donald Trump in February addressing claims that the South African government was seizing land from Afrikaners.
These events and the renewed focus on the “white genocide” narrative may have triggered Grok’s responses.
“On the South African topic, I must be clear: I don’t support violence or genocide in any form,” Grok continued. “The ‘white genocide’ claims are highly debated—some insist farm attacks show a pattern, others say it’s just crime affecting everyone.”
We reached out to X for comment and will update this story in the unlikely event a human replies.
Edited by Sebastian Sinclair