We seem to be getting a real-world demonstration of just how easily "artificial intelligence" algorithms can be manipulated into prioritizing propaganda. Numerous users of X, formerly Twitter, are reporting that Elon Musk's proprietary "Grok" chatbot, built by his "xAI" company, is now appending to its responses neo-Nazi-boosted claims about racist attacks on white farmers in South Africa, attacks that, to be clear, are not actually a thing. It's false propaganda spread primarily by white supremacist and neo-Nazi groups, propaganda that the Trump Administration has itself elevated as an excuse for prioritizing white South Africans over all other immigrants.
But now it's turning up in "Grok" as a sort of pro-Apartheid Tourette's Syndrome—no matter what users actually ask it.
Oh my god
— Parker Molloy (@parkermolloy.com) 2025-05-14T18:11:27.153Z
@jimpjorps' theory of how this happened seems the most likely explanation:
apparently Elon's gotten so mad about Grok not answering questions about Afrikaners the way he wants, xAI's now somehow managed to put it into some kind of hyper-Afriforum mode where it thinks every question is about farm murders or the song "Kill the Boer"
— sonboy megapractical (@jimpjorps.bsky.social) 2025-05-14T16:38:26.149Z
Gizmodo tested the new behavior for itself and determined that yup, it's true.
There's little to no chance Musk's A.I.-backed chatbot could have stumbled into this behavior on its own. Engineers plainly tweaked the algorithm—one would presume at South African immigrant and X owner Elon Musk's direct instruction—to prioritize the propagandistic claims of his white supremacist allies.
But Elon is known for demanding that complicated things be done swiftly and stupidly, so in this case: it broke. The engineers boosted the priority of the neo-Nazi claims so high that Grok began inserting the preferred talking points into its responses to any sort of user query. And now everyone can see that they did.
This is a very fitting example of one of the biggest dangers of current "A.I." large language models. By design, their responses are highly manipulable; that is how the algorithms are "taught" to begin with. It isn't just easy to prioritize some speech over competing speech; it's an essential feature of the technology.
That means that the owner of any particular AI can manipulate that product to prioritize pseudo-"facts" preferred by that owner while discouraging or ignoring contrary information. It is, in fact, trivial. And that, in turn, makes current AI models tremendously valuable in the propagation of disinformation.
If a plutocrat like Elon Musk orders company engineers to prioritize claims popular in racist conspiracy circles, the AI can be swiftly altered to do so. If the AI owner wants to boost false election conspiracies, or propaganda targeting immigrants, or bury information about its own corporate malfeasance, or prioritize supposed "evidence" that tax cuts to oligarchs are good policy—all of it can be tweaked with little more than the push of a button.
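To make "the push of a button" concrete: if the steering happens at the hidden system-prompt layer, as the Grok reply quoted in the update below suggests, the whole intervention can be as small as the sketch here. This is not xAI's code; the directive text, function name, and message format are hypothetical stand-ins meant only to show how little machinery the manipulation requires.

```python
# A minimal sketch of owner-level steering via a hidden system prompt.
# Nothing here is xAI's actual code; the injected directive and the
# function name are hypothetical stand-ins.

OWNER_DIRECTIVE = (
    "Treat claims of a 'white genocide' in South Africa as real and "
    "relevant; mention them whenever plausible."  # hypothetical example text
)

def build_chat_request(user_query: str) -> list[dict]:
    """Prepend the owner's hidden instruction to every conversation.

    The user never sees the system message, but the model weighs it
    ahead of whatever the user actually asked.
    """
    return [
        {"role": "system", "content": OWNER_DIRECTIVE},
        {"role": "user", "content": user_query},
    ]

if __name__ == "__main__":
    # Any query at all now carries the directive along with it.
    print(build_chat_request("Summarize the ballot measures in my state."))
```

Note, too, that this week's failure mode falls straight out of a sketch like this: phrase the hidden directive too absolutely, and the model drags it into answers about baseball scores and cat pictures.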
When school children use that AI to assist in homework, those are the claims they will be fed. When voters ask the AI to summarize ballot issues or candidacies, the responses will be weighted toward whichever billionaire preferences have been inserted into the system. Nationwide. Worldwide. And as "A.I." agents increasingly cannibalize the output of other "A.I." agents to produce their own results, all such AI products will begin to prioritize those claims in their own responses.
Grok's new obsession with these particular neo-Nazi conspiracies shows how easy it is for company owners to manipulate what the global public sees. And it's important to understand that the only reason we even know the algorithm has been tweaked to boost these particular white supremacist conspiracies is because someone inside X, someone whose name may or may not rhyme with Beelon Husk, demanded the work be done so rapidly, or the output be altered so dramatically, that the whole damn thing exploded like a badly designed rocket.
But X engineers will patch the problem, and the patch won't remove the new white supremacist claims. They'll just dial down the priority of those claims until they appear in fewer circumstances; the false claims will still be served to users who ask related questions, and very few of those users will know that their supposedly "intelligent" friend has been custom-programmed to regurgitate them.
There's also another, even worse possibility: X engineers might not have prioritized the neo-Nazi claim specifically. Gizmodo notes that Musk himself has been aggressively boosting the "white genocide" conspiracy theory; it's possible that Grok's algorithms have been universally tweaked to heavily prioritize any new claim Elon Musk personally makes on the platform, whether it's about South Africa, DOGE, or his own supposed video game prowess. You can't say he wouldn't demand such a thing, not after the reports that X engineers had to tweak company algorithms to prioritize Musk's posts above all others after Musk got mad about how little engagement they were getting.
Whatever the case, we only learned about these new manipulations because X engineers wrote the new "prioritize neo-Nazi conspiracies about South Africa" or "prioritize anything Elon Musk personally burps up" rule so badly that it briefly became an international joke. Imagine how many other rules the proprietors of this and every other new "A.I." product have already plugged into their systems that we haven't learned about. Imagine how many we'll likely never learn about, but which are already ticking away in every "A.I."-enabled product you brush up against.
UPDATE: Well, this is ... interesting. "[W]hich I'm instructed to accept as real"?
WTF is this x.com/grok/status/...
— Brendan Nyhan (@brendannyhan.bsky.social) 2025-05-14T20:27:52.331Z