Who Should Control What AI Tells Us Is True?
An argument in favor of messy, overlapping, pluralistic institutions.
The use of AI chatbots is growing at an astounding rate, but we are just beginning to come to terms with their staggering persuasive power. How we choose the “truths” that AIs promote will have profound implications for the future of liberal democratic societies.
Modern AI chatbots have a real knack for persuading their human interlocutors. Recent research, including studies in which dialogues with LLMs reduced participants’ belief in conspiracy theories, demonstrates these capabilities. Headlines questioning the role of chatbots in teen suicides shine a powerful spotlight on the potential impact of this persuasiveness.
We naturally focus on shocking AI failures and dangerous conspiracy theories. But chatbots’ real influence is quieter. Many people will slowly adopt the implicit assumptions that underlie their chatbot’s answers, just as many have already adopted LLMs’ syntax and vocabulary.
A handful of large tech firms, like Anthropic, OpenAI, Google DeepMind, and Meta, hold great sway over the beliefs their AI models promote. Most of these beliefs are artifacts of the data used in a model’s pretraining, and pretraining is almost entirely controlled by these firms. Post-training and system prompts can modify the beliefs a model promotes. Although large tech firms also play a leading role in post-training and in setting system prompts, those stages are far more open to outside participation.
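To make the system-prompt point concrete, here is a minimal sketch using the OpenAI Python SDK: the same pretrained model, asked the same question under two different institutional framings. The model name, prompts, and question are illustrative assumptions, not recommendations.

```python
# Minimal sketch: one pretrained model, two institutional framings.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str) -> str:
    # The system prompt sets the framing before the user's question is seen.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Should my town build more housing?"
print(ask("Answer from a market-oriented economics perspective.", question))
print(ask("Answer from a community-preservation perspective.", question))
```

The underlying weights never change; only the framing does, which is why the later stages of the pipeline are where smaller institutions can realistically exert influence.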
Given the newness of these technologies, now is the time to ask: who should decide which beliefs AI chatbots promote? Firms have already tried to shape chatbot responses to match their CEOs’ politics. Authoritarian regimes are doing the same. As these examples suggest, there are reasons to be wary of both government overreach and corporate control. As technologies that democratize post-training become cheaper and more widely available, civil institutions, like universities, churches, and unions, should play a larger role.
Tools like AI orchestration layers can help civil institutions offer their community members AI chatbots that promote beliefs aligned with the institution’s mission. An orchestration layer acts like a switchboard: when a user enters a query into the institution’s chatbot, the layer can first route the query to an evaluation AI that decides which underlying AI(s) should reply. The layer can also add institutional prompts and modify the answers it receives from different AI chatbots.
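As a rough illustration, here is a minimal sketch of such a layer in Python. Everything in it is hypothetical: the evaluator and backends are stand-ins for whatever models an institution actually wires in, and a production layer would add authentication, logging, and safety checks.

```python
from dataclasses import dataclass
from typing import Callable

Model = Callable[[str], str]  # a chat backend: query text in, answer text out

@dataclass
class OrchestrationLayer:
    evaluator: Model            # the evaluation AI that picks a backend
    backends: dict[str, Model]  # named backends; assumes a "general" fallback exists
    institutional_prompt: str   # framing the institution adds to every query

    def route(self, query: str) -> str:
        # 1. Ask the evaluation AI which backend should handle this query.
        choice = self.evaluator(query).strip()
        backend = self.backends.get(choice, self.backends["general"])
        # 2. Prepend the institution's prompt before forwarding the query.
        answer = backend(f"{self.institutional_prompt}\n\n{query}")
        # 3. The layer may annotate or otherwise modify the answer on the way back.
        return f"[answered by: {choice}] {answer}"

# Toy stand-ins for real model endpoints.
def evaluator(query: str) -> str:
    return "science" if "vaccine" in query.lower() else "general"

layer = OrchestrationLayer(
    evaluator=evaluator,
    backends={
        "general": lambda q: f"(general model) {q}",
        "science": lambda q: f"(science model) {q}",
    },
    institutional_prompt="Answer in line with our institution's published norms.",
)
print(layer.route("How safe are vaccines?"))
```

Putting the evaluation step in front of the backends means an institution can change its routing policy without touching the models themselves.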
Imagine a university student asks the school’s AI chatbot about a politically charged issue, like vaccine safety. (While most mainstream AI chatbots affirm that vaccines are safe, they differ in how they represent vaccine skepticism.) If the university managed its AI chatbot with an orchestration layer, the student could see responses from multiple AIs, with their differences highlighted. The university’s AI could also add context from its biology and political science faculties. In addition to helping students critically weigh AI-generated inputs, these tools would give universities a new way to serve alumni.
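Here is a minimal sketch of what the comparison view behind that experience might do, reusing the hypothetical Model callable from the sketch above. difflib is Python’s standard diffing library, and the faculty note stands in for curated institutional context.

```python
import difflib
from typing import Callable

Model = Callable[[str], str]  # as in the orchestration sketch above

def compare_answers(query: str, models: dict[str, Model], faculty_note: str) -> str:
    # Fan the query out to every configured AI and collect the answers.
    answers = {name: model(query) for name, model in models.items()}
    sections = [f"--- {name} ---\n{text}" for name, text in answers.items()]
    # Highlight, line by line, where the first two answers diverge.
    names = list(answers)
    if len(names) >= 2:
        diff = difflib.unified_diff(
            answers[names[0]].splitlines(),
            answers[names[1]].splitlines(),
            fromfile=names[0],
            tofile=names[1],
            lineterm="",
        )
        sections.append("--- differences ---\n" + "\n".join(diff))
    # Append curated context from the institution's own experts.
    sections.append(f"--- faculty context ---\n{faculty_note}")
    return "\n\n".join(sections)
```

The point of the design is that the student sees disagreement between models as a first-class part of the answer, rather than a single voice presented as settled truth.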
Universities are not the only civil institutions that could use tools like orchestration layers to shepherd their communities’ use of AI. Other civil institutions, like churches, synagogues, temples, mosques, unions, clubs, and political organizations, have historically played a role in developing their communities’ epistemic norms. Running their own orchestration layers could help these institutions renew their remit for the Age of AI.
Civil institutions are not perfect. Some academic, religious, social, and political institutions will promote echo chambers or otherwise abuse their new power. However, these risks pale in comparison to the risk of epistemic power consolidating in the hands of a few. Unlike online chatrooms, most civil institutions bridge different types of people, and most people associate with more than one civil institution.
In addition to preserving institutional agency, distributing epistemic power across an overlapping tapestry of civil institutions offers several distinct advantages. This messy system will be far more robust than one in which epistemic power is consolidated. A tapestry of civil institutions will also help us learn how best to manage our new AI tools. We do not yet know which mixes of tools, rules, and norms will best promote human flourishing. Empowering civil institutions to curate their communities’ digital tools will help us search this complex space with a thoroughness a handful of firms cannot match.
We have a brief window. Civil institutions should start acting now.