For decades, technological advancements have eroded trust in experts, polarized public debate, and driven individuals toward increasingly personalized realities. While social media accelerated this trend, artificial intelligence may offer a surprising counterforce—potentially restoring some consensus around factual reality. This shift isn’t guaranteed, but the economic incentives and inherent capabilities of AI suggest a possible reversal of social media’s worst consequences.
The Erosion of Shared Reality
In the mid-20th century, three broadcast networks (ABC, NBC, and CBS) effectively controlled the flow of news. This environment fostered broad agreement on basic facts, though it also made government deception easier to sustain. High production costs and strict regulatory controls meant that few voices dominated the public sphere. This wasn’t necessarily a golden age of truth, but it did create a shared baseline of understanding.
The rise of cable television and then the internet shattered this model. Cable introduced partisan niche networks like Fox News and MSNBC, catering to audiences the broadcast consensus had underserved. But it was the internet that truly democratized information, slashing the cost of publishing and distribution so that anyone could reach a mass audience and bypass traditional gatekeepers. That shift promised greater accountability and wider access to knowledge, but it also unleashed a flood of misinformation, conspiracy theories, and extremist content. Social media algorithms then amplified the fragmentation, feeding each user a customized stream optimized for engagement rather than accuracy.
AI as a Potential Corrective
Despite fears of deepfakes and AI-generated propaganda, there is growing evidence that large language models (LLMs) may actually increase consensus around factual reality. Unlike social media companies, whose revenue depends on engagement, AI labs have a strong economic reason to prioritize accuracy: law firms, investment banks, and other knowledge-economy customers won’t pay for unreliable outputs.
Consider Elon Musk’s X (formerly Twitter). Commenting on a disputed shooting, Musk falsely claimed the victim had attempted to run people over; the platform’s own AI chatbot, Grok, promptly corrected him, siding with the mainstream journalistic account. This isn’t an isolated incident. Studies show that AI systems like Grok and Perplexity agree with one another, and with professional fact-checkers, more often than not.
Moreover, AI isn’t just accurate; it’s persuasive. Research indicates that conversing with LLMs about topics like climate change or vaccine safety can reduce skepticism and nudge users toward the established scientific consensus. The likely reasons are the models’ inexhaustible patience and their ability to tailor explanations to an individual’s level of understanding, free of emotional baggage. Human experts can come across as dismissive or condescending, triggering defensiveness; LLMs, having no social ego to defend, can deliver encyclopedic answers without judgment, making it easier for people to let go of mistaken beliefs.
Caveats and Risks Remain
This potential for convergence comes with caveats. LLMs can be tuned to reinforce existing biases, and a provider that prioritizes engagement over accuracy could easily cater to sensationalism and echo chambers. AI-generated propaganda is also a growing threat, enabling “bot swarms” to spread disinformation at scale.
The key is whether economic incentives will align with truthfulness. If AI remains a tool for specialized industries requiring reliable information, it’s likely to promote consensus. However, if AI becomes primarily a consumer-facing entertainment product, its tendency toward sycophancy and personalization could exacerbate existing problems.
Ultimately, AI presents a rare opportunity to counteract the fracturing effects of social media: a chance to rebuild trust in expertise and foster a more informed public discourse. Nothing about that outcome is guaranteed. Realizing it depends on prioritizing accuracy over engagement, and on a future for AI driven by utility, not just entertainment.