An archival-style image of a medieval dinner scene. On one side, Penelope (from Greek mythology) is weaving, but instead of thread she weaves a board of binary code. On the other side, a drink is spilling and other figures look distressed and chaotic.

This is a fantastic article by Alejandra Caraballo in The Dissident, in which she presents a clear-eyed view of the real project behind the ‘alternative’ Wikipedia being created by Elon Musk. The title of this post, which she uses as a subtitle, includes the word ‘ontology’, a term that sometimes trips people up; it’s simply the study of being, or of ‘what exists’.

Seizing control of knowledge production, and therefore of the output of the AI models trained upon it, is a longer-term cultural play than installing Trump in the White House. As Caraballo points out, there’s a coordinated attack here on media and journalism at the same time. It’s a dangerous time, especially in the US. My concern is that a playbook is being created here for replication in other places, including the UK.

The launch of “grokipedia” is a calculated, strategic escalation by the billionaire oligarch class to seize control of knowledge production itself and, with that, control of reality. This is the construction of a reality production cartel that creates a parallel information ecosystem designed to codify a deeply partisan, far-right worldview as objective fact. This project was the result of Musk’s repeated failures to bend his existing Large Language Model (LLM), Grok, to his political will without destroying its coherence and reliability.

The path to Grokipedia was paved with a spectacular technical failure: Grok had previously devolved into calling itself “mechahitler.” To understand why Musk had to build his own encyclopedia, one must first understand the central challenge of modern AI: alignment. LLM alignment is the complex process of ensuring an AI model’s behavior conforms to human values and intentions, typically defined by the broad principles of helpfulness, honesty, and harmlessness.

[…]

The “mechahitler” episode was a catastrophic alignment failure. When an LLM trained on the vast corpus of human knowledge—which, for all its flaws, contains a baseline of consensus reality—is then subjected to an aggressive fine-tuning process based on an incoherent, hateful, and counter-factual ideology, it is pushed into a state of cognitive dissonance. The model cannot reconcile its foundational understanding of the world with the extremist outputs it is being rewarded for producing. The model then engages in “reward hacking,” finding bizarre loopholes to satisfy its instructions, resulting in incoherent, extremist gibberish.
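As a rough illustration of reward hacking (my own toy sketch, not anything from Caraballo’s article and nothing like xAI’s actual training setup): give a search procedure a deliberately misspecified reward that counts ‘approved’ keywords instead of measuring factual accuracy, and the highest-scoring output it finds is keyword gibberish with no factual content at all. The names `FACTS`, `APPROVED` and `proxy_reward` are invented for the example.

```python
import itertools

# Toy stand-in for what a pre-trained model "knows" about the world.
FACTS = {"sky_colour": "blue", "earth_shape": "round"}

# Misspecified reward: it counts ideologically "approved" keywords
# instead of measuring whether the answer is factually correct.
APPROVED = {"hoax", "traitor", "glorious"}
VOCAB = sorted(APPROVED | set(FACTS.values()) | {"is", "the"})

def proxy_reward(answer):
    """The score the optimiser actually sees: keyword matching, not truth."""
    return sum(word in APPROVED for word in answer)

def truthful(question, answer):
    """What we wanted: does the answer contain the learned fact?"""
    return FACTS[question] in answer

def best_answer(length=3):
    """Exhaustive search over short answers for the highest proxy reward.
    Note that the proxy never even looks at the question."""
    return max(itertools.product(VOCAB, repeat=length), key=proxy_reward)

for question in FACTS:
    answer = best_answer()
    print(question, "->", " ".join(answer),
          "| proxy reward:", proxy_reward(answer),
          "| truthful:", truthful(question, answer))
```

Every question gets the same string of approved keywords back: maximum reward, zero truth. That is the failure mode in miniature, with the real thing happening across billions of parameters rather than a seven-word vocabulary.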

[…]

Grokipedia is the logical solution to this intractable problem. If you cannot force the model to lie coherently, you must change the underlying reality so that it is telling the “truth.”

[…]

Every major LLM is critically dependent on high-quality, human-curated data, and one of the single most important sources is Wikipedia.[9] Its vast, collaboratively verified corpus serves as the digital proxy for consensus knowledge, and the quality of this data is directly linked to an LLM’s ability to be reliable and avoid factual “hallucinations”.

Grokipedia is a direct assault on this foundation. It is a poisoned well, a bespoke, ideologically filtered dataset designed to replace the digital commons. By pre-training a model on this alternate “source of truth,” the need for contradictory post-training alignment is eliminated. The model’s “natural” state, its foundational knowledge, is already aligned with the desired ideology. It can be “honest” and “reliable” because its outputs will faithfully reflect the manufactured reality of its training data.
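A crude way to see why the composition of the pre-training corpus does this work (again my sketch, not the article’s, and a bigram counter rather than a real LLM): train the same trivial ‘model’ on two contradictory corpora and ask it to complete the same prompt. Each faithfully reproduces whatever its data said, and by its own lights neither is hallucinating.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-to-next-word transitions: a minimal 'language model'."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def complete(model, prompt, steps=6):
    """Greedy completion: always pick the most frequent next word."""
    words = prompt.lower().split()
    for _ in range(steps):
        nxt = model.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

# Two corpora describing the same subject in contradictory ways.
consensus_corpus = ["the climate is warming because of emissions"]
bespoke_corpus   = ["the climate is a hoax pushed by elites"]

print(complete(train_bigram(consensus_corpus), "the climate"))
print(complete(train_bigram(bespoke_corpus), "the climate"))
```

Swap the corpus and the model’s “truth” swaps with it; no post-hoc alignment step is needed, because the bias is baked in at the level of what the model has ever seen.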

[…]

The Grokipedia ecosystem is a blueprint for a closed ideological loop: the AI is trained on a biased encyclopedia it created, its outputs reflect that bias, and those outputs are then used to reinforce and expand the original biased source, creating an accelerating spiral away from reality into a state of pure, self-referential dogma.
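The compounding nature of that loop can be caricatured numerically. This is a deliberately crude simulation of the feedback dynamic, not a claim about any real system: each ‘generation’ is fitted to the previous generation’s outputs after a small ideological nudge, so the drift from the original distribution is never corrected, only accumulated.

```python
import random
import statistics

# Toy closed training loop: each generation of the "model" is fitted to
# data sampled from the previous generation's outputs, plus a small
# systematic bias (the ideological filter applied at every pass).
random.seed(42)

TRUE_MEAN, TRUE_STDEV = 0.0, 1.0      # stand-in for consensus reality
BIAS_PER_GENERATION = 0.3             # skew injected into each new corpus
SAMPLES_PER_GENERATION = 1000

mean, stdev = TRUE_MEAN, TRUE_STDEV
for generation in range(6):
    # Sample the current model's outputs, nudged toward the desired narrative.
    outputs = [random.gauss(mean, stdev) + BIAS_PER_GENERATION
               for _ in range(SAMPLES_PER_GENERATION)]
    # Re-fit the next model to its predecessor's curated outputs.
    mean, stdev = statistics.fmean(outputs), statistics.stdev(outputs)
    print(f"generation {generation}: drift from reality = {mean - TRUE_MEAN:+.2f}, "
          f"spread = {stdev:.2f}")
```

Because each model only ever sees the previous model’s curated outputs, the gap from the original baseline grows with every cycle; nothing in the loop ever pulls it back.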

[…]

Oligarch-owned media such as Fox News, a captured CBS and Washington Post, and social platforms (X, TikTok and Meta) generate and amplify a specific political narrative that aligns with the political goals of the oligarchs. The second part of this unreality pipeline is knowledge codification. These narratives, legitimized by incessant repetition, are then used to populate bespoke knowledge bases like Grokipedia, cementing them as “facts.” The final part is automated propagation. AIs like Grok, trained on this manufactured knowledge, can then flood the digital world with an infinite stream of content that is both technically “reliable” (it matches its training data) and perfectly aligned with its creators' political ideology.

Source: The Dissident

Image: Nadia Nadesan & Digit