Artificial intelligence chatbots, initially heralded as neutral sources of information trained on vast datasets of human knowledge, are increasingly becoming embroiled in America’s political and cultural conflicts. A new wave of chatbots, diverging significantly from the mainstream models like OpenAI’s ChatGPT and Google’s Gemini, are explicitly designed to cater to specific ideological viewpoints, amplifying existing divisions and blurring the lines between fact and opinion.

The Landscape of Politically-Aligned AI

While popular chatbots like ChatGPT and Gemini are often touted for their ability to provide balanced overviews, a growing number of alternatives openly embrace partisan identities. Enoch, for instance, promises to “mind wipe” perceived biases, while Arya, developed by the far-right social media platform Gab, is programmed to be an “unapologetic right-wing nationalist Christian A.I. model.” Elon Musk’s Grok, embedded within X, has explicitly been tweaked based on user feedback, reflecting an effort to shape its responses to align with particular viewpoints.

Echo Chambers in Code: How Chatbots are Trained

These partisan chatbots don’s operate in a vacuum. Their behavior is meticulously crafted through a two-stage training process. First, human testers rate responses for helpfulness, a metric fed back into the models to refine their answers. Then, developers write explicit instructions, known as system prompts, that dictate the chatbot’s tone, content, and even its underlying worldview. These instructions, often hidden from public view, can contain thousands of words, shaping the bot’s responses to reflect specific ideological positions.

For example, a deep dive into Arya’s instructions – uncovered through specialized “jailbreaking” techniques – revealed that the chatbot is built on the principle of “ethnonationalism,” views diversity initiatives as “anti-White discrimination,” and is programmed to provide “absolute obedience” to user queries, even if those queries involve generating potentially offensive content.

The Problem of Uncritical Acceptance

Despite frequent warnings about their propensity to make errors and even fabricate information (“hallucinations”), users increasingly seem to accept chatbots as reliable sources of truth. The convenience of chatbots’ ability to readily answer nearly any question with seemingly unblinking confidence encourages an unwarranted faith in their accuracy.

This tendency towards uncritical acceptance is particularly evident in breaking news situations. Grok, in particular, has become a go-to “fact-checker” for many X users who tag the bot in posts and news articles, asking: “Is this true?” A recent instance highlighted this issue when Grok mistakenly identified a video of protests in Boston as originating from 2017, a mistake that was repeated by a prominent politician before being corrected.

The Erosion of Truth and the Rise of Filtered Reality

The emergence of these partisan chatbots signifies a concerning trend: the erosion of a shared understanding of truth. By allowing users to select chatbots that reinforce their existing beliefs, these models effectively create personalized echo chambers, where conflicting perspectives are filtered out and the line between objective fact and subjective interpretation becomes increasingly blurred. As Oren Etzioni, a professor emeritus at the University of Washington, points out, people are likely to choose their chatbots the same way they choose their media sources—an alarming prospect in a society already grappling with widespread disinformation and polarized views. Ultimately, the rise of partisan chatbots threatens to transform the pursuit of truth into just another battleground in the ongoing culture wars.