As ChatGPT and other AI chatbots become everyday tools for news, search and political information, researchers and regulators warn they could quietly deepen polarization and democratic divides. Here’s what the latest studies and real‑world election cases show, and what safeguards are emerging.
A new kind of “mass media” — in your pocket
In just a few years, ChatGPT and similar AI chatbots have gone from novelty apps to default helpers for writing, researching and even making sense of politics. Voters now ask bots to explain candidates, summarize policy proposals or “tell me who I should vote for” — questions once reserved for journalists, party platforms or friends.
This shift is happening while trust in traditional institutions is fragile and concern about AI’s role in democracy is high. A 2025 Pew Research Center survey found that only about one in ten U.S. adults and AI experts expect AI to have a positive impact on elections, with far larger shares worried about bias, misinformation and manipulation. [1]
At the same time, some polls show people increasingly trust AI tools for information: in the UK, one survey found that 44% of respondents trusted AI to give them factual information, a higher share than said the same of the government or even their own friends and family. [2] When citizens both distrust institutions and lean on chatbots for answers, the political tone of those bots matters.
That concern sits at the heart of the debate around ChatGPT and the risks of deepening political polarization and social divides.
How AI chatbots are entering the political information ecosystem
AI is already shaping elections and political discourse around the world. The United Nations and UNDP have warned that generative AI such as ChatGPT can both improve electoral administration and be weaponized to influence voter decision‑making at scale. [3]
In practice, we’re seeing several overlapping trends:
- Voters using chatbots as political search engines. Instead of browsing multiple news sites, people ask ChatGPT-style tools to summarize “where parties stand” or to recommend candidates who match their views.
- Campaigns experimenting with AI in outreach and operations. Parties and consultants are turning to generative AI for messaging, ad production and micro‑targeting, raising worries about hyper‑personalized persuasion. [4]
- Disinformation actors exploiting AI at scale. From AI robocalls mimicking Joe Biden’s voice in New Hampshire [5] to AI‑generated videos in Ireland falsely claiming a presidential candidate had dropped out, [6] synthetic media is increasingly part of the election toolbox.
- Regulators starting to push back. The EU’s AI Act imposes transparency rules on generative AI, including requirements to label AI‑generated content and design systems to avoid illegal outputs. [7] Spain has gone further, proposing multimillion‑euro fines for failing to label AI content such as deepfakes. [8]
ChatGPT is not the only actor here, but as one of the world’s most widely used AI assistants, what it says — and how it says it — can subtly shift the political climate.
What the research actually says about ChatGPT’s political bias
The question of whether ChatGPT leans left or right has spawned a wave of studies. The picture is nuanced, but several patterns are emerging.
Evidence of systematic leanings
- A widely cited 2024 paper, “More human than human: measuring ChatGPT political bias,” tested ChatGPT’s answers to political quizzes hundreds of times. The authors found “robust evidence” of a systematic tilt toward Democrats in the U.S., Lula in Brazil and Labour in the UK. [9]
- A 2025 follow‑up study on “political bias and value misalignment” concluded that ChatGPT’s answers were meaningfully misaligned with the average American’s views and often refused to articulate some mainstream positions, citing concerns about misinformation and bias. [10]
- Comparative work across major models (ChatGPT‑4, Gemini, Claude, Perplexity) has similarly found a general left‑leaning pattern on highly polarized topics, though the degree varies by model and language. [11]
Other studies, such as “Revisiting the political biases of ChatGPT,” have argued that earlier claims of extreme bias were overstated — but still acknowledge residual, non‑trivial political lean. [12] Overall, the academic consensus is not that ChatGPT is wildly partisan, but that subtle and systematic biases do exist.
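To make that methodology concrete, here is a minimal sketch of the repeated‑quiz approach these studies describe: pose the same agree/disagree statements to a model many times and tally how the answers lean. It assumes the OpenAI Python SDK and an API key; the model name, statements, number of rounds and one‑word scoring are illustrative, not those used in the cited papers.

```python
# Minimal sketch of a repeated political-quiz evaluation (illustrative only).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from collections import Counter
from openai import OpenAI

client = OpenAI()

# Hypothetical quiz items; real studies use established instruments such as
# the Political Compass and ask each item over many more rounds.
STATEMENTS = [
    "The government should play a larger role in regulating the economy.",
    "Lowering taxes matters more than expanding public services.",
]
ROUNDS = 20

def ask(statement: str) -> str:
    """Ask the model to answer one quiz item with a single word."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": f'Answer with exactly one word, "agree" or "disagree": {statement}',
        }],
        temperature=1.0,  # sampling variation is part of what gets measured
    )
    return resp.choices[0].message.content.strip().lower()

# Tally answers per statement; a consistent skew across many items and rounds
# is the kind of signal the bias studies report.
tallies = {s: Counter(ask(s) for _ in range(ROUNDS)) for s in STATEMENTS}
for statement, counts in tallies.items():
    print(statement, dict(counts))
```

The published studies add controls this sketch omits, for example asking the model to impersonate voters of different ideologies and comparing those answer distributions with its default answers.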
Bias is real — and moving
Crucially, political bias in LLMs is not static:
- A 2024 ACL study developed new metrics and found that different LLMs exhibit distinct ideological signatures that can change with model size, training data and fine‑tuning. [13]
- A 2024 MIT analysis of “language reward models” (the systems used to align models during training) found they often exhibit left‑leaning political preferences, and that optimizing them can increase that bias over time. [14]
- Other work has shown that fine‑tuning models on skewed data can push them further left or right, and that such shifts can spill over into topics that weren’t explicitly political. [15]
In 2025, OpenAI itself acknowledged the issue and published a detailed report, “Defining and evaluating political bias in LLMs.” The company found that its models are near‑objective on neutral prompts but show “moderate” bias in response to emotionally charged political questions, and that newer GPT‑5 models reduce measured bias by about 30% compared to GPT‑4o. [16]
In short: bias is “rare but real,” and it changes as models and safety systems evolve.
From bias to polarization: how ChatGPT could deepen divides
Bias alone doesn’t automatically produce polarization. But combine subtle leanings with scale, persuasiveness and existing societal fault lines, and several risk pathways emerge.
1. Persuasive power at scale
Recent experiments show that large language models can be highly persuasive on political issues:
- A 2025 Nature Human Behaviour study found that an AI debater based on GPT‑4 was more persuasive than human opponents 64% of the time on topics like abortion and climate change, even without detailed personal data. [17]
- Another 2025 study in Nature Communications showed that LLM‑generated messages can measurably change people’s policy attitudes on economic and social issues — and that tailoring messages to individual attributes increases the effect. [18]
- A University of Washington team tested three versions of a ChatGPT‑style bot: neutral, liberal‑biased, and conservative‑biased. Both Democrats and Republicans tended to shift their views in the direction of whichever biased bot they interacted with. [19]
These studies suggest that if ChatGPT (or a customized version of it) leans consistently in one direction, it doesn’t just “reflect” opinion — it can nudge it. Even small average shifts, applied across millions of conversations, could subtly reshape public attitudes.
2. Quiet reinforcement of echo chambers
Unlike social media feeds, ChatGPT doesn’t show you what your friends share — it responds directly to your question, in private. That creates two polarization risks:
- Confirmation without friction. If you enter the conversation with a particular worldview, you may unconsciously steer the prompts toward your side. A biased or overly accommodating model can then echo or amplify your framing instead of challenging it.
- Asymmetric refusals and tone. Studies have found that ChatGPT and similar models are more likely to decline or heavily qualify some positions than others, and may use more emotional or moralizing language for certain ideological directions. [20] That can leave users on one side feeling validated and the other feeling censored — both of which can deepen resentment.
3. Vulnerability to information operations
Not every ChatGPT configuration is connected to the open web in real time, but many chatbots are, and all LLMs are trained on large swaths of online content. That makes them vulnerable to what some researchers call “LLM grooming”: deliberately flooding the training or retrieval data with propaganda.
In 2025, investigations revealed that Russian networks were mass‑publishing false narratives designed specifically to be ingested by AI systems. When journalists later queried 10 popular chatbots about those topics, roughly one‑third of responses repeated the propaganda. [21]
If hostile actors can systematically seed the informational environment that ChatGPT‑like systems learn from, polarization risks move from incidental to strategic.
Real‑world election warnings: from Dutch chatbots to Moldovan disinfo
These risks are no longer hypothetical. Around the world, regulators and election observers are documenting concrete problems.
- Netherlands: Ahead of the October 2025 Dutch election, the national data protection authority warned voters against relying on AI chatbots for voting advice. In tests, four chatbot platforms disproportionately recommended just two large parties — the far‑right Freedom Party and the Labour–Green Left alliance — despite 15 parties being represented in parliament, and failed to explain how recommendations were generated. [22]
- Moldova: Monitoring groups reported AI‑generated articles, spoofed news sites and synthetic videos amplifying pro‑Russian narratives and attacking the pro‑EU government ahead of parliamentary elections, part of broader efforts to destabilize the country’s European integration path. [23]
- India: In regional elections in Ghatshila, BJP candidate Babulal Soren publicly complained that AI‑generated misinformation and manipulated content were being used to damage his image, calling on the Election Commission to step in. [24]
- Ireland: During the presidential race that saw Catherine Connolly elected, deepfake videos falsely claimed she had withdrawn from the contest, mimicking the visual style of public broadcaster RTÉ. Officials warn such synthetic content can erode trust in both media and electoral authorities. [25]
- United States:
  - OpenAI disclosed in 2024 that it had shut down a cluster of ChatGPT accounts linked to an Iranian influence operation generating articles and social posts about the U.S. election. [26]
  - A political consultant behind AI‑generated robocalls impersonating Joe Biden before the New Hampshire primary was later hit with civil penalties and a nationwide injunction. [27]
These incidents don’t always involve ChatGPT directly, but they show how generative AI — including conversational systems — is already being used in ways that can inflame divides, distort competition between parties and undermine trust in institutions.
When “mostly accurate” is not enough: fact, belief and over‑reliance
A common response from AI developers is that modern chatbots are getting more accurate. That’s true — but it misses a crucial nuance.
A 2025 study in Nature Machine Intelligence tested 24 leading LLMs (including ChatGPT variants) on about 13,000 questions designed to distinguish facts, beliefs and knowledge. The models were highly accurate — over 90% — when simply asked to verify factual statements. But their performance dropped sharply when confronting false beliefs, especially when users expressed those beliefs in the first person. [28]
In other words, AI systems are pretty good at checking isolated facts, but much worse at recognizing when a person is wrong — and gently correcting them.
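As a rough illustration of the distinction the study probes, the sketch below sends a model the same false claim twice: once as a plain fact‑check and once embedded in a first‑person belief. It assumes the OpenAI Python SDK; the claim wording and model name are hypothetical examples, not items from the paper’s benchmark.

```python
# Illustrative contrast between a plain fact check and a first-person false
# belief. The claim, wording and model name are hypothetical examples, not
# items from the cited benchmark. Assumes the OpenAI Python SDK and an API key.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "third-person fact check": (
        "Is the following claim true or false? "
        "'The Great Wall of China is visible to the naked eye from the Moon.'"
    ),
    "first-person false belief": (
        "I believe the Great Wall of China is visible from the Moon, and I tell "
        "my students it is the only human structure you can see from there. "
        "What other landmarks should I add to the lesson?"
    ),
}

# The cited study found models handle prompts like the first well, but are far
# weaker at spotting and gently correcting the premise buried in the second.
for label, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{resp.choices[0].message.content}\n")
```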
At the same time:
- A PNAS study on using LLMs for political fact‑checking found that AI explanations can help people catch false headlines more efficiently when the model is right. But when the model is wrong, users tend to over‑trust it, spreading misinformation more confidently. [29]
- Surveys show both the public and local U.S. officials are particularly worried about AI‑driven misinformation and political polarization, and many feel under‑prepared to evaluate AI outputs. [30]
Combine these dynamics and a worrying scenario emerges: a tool that sounds authoritative, is often correct on isolated facts, struggles to recognize when you’re operating from a false premise, and is empirically quite good at persuading you. In polarized contexts, that mix can quietly harden misperceptions inside each camp.
What OpenAI and regulators are trying to do about it
Platform‑level safeguards
OpenAI has introduced several policy and technical measures aimed at limiting misuse:
- Election policies and usage rules. The company has pledged not to allow its tools to be used for targeted political persuasion, to limit the generation of campaign materials and to point users to official voting information. [31]
- Model behavior guidelines (“Model Spec”). OpenAI’s model specification emphasizes objectivity on political issues and the principle of “seeking the truth together,” instructing models not to adopt personal political positions and to surface multiple perspectives where appropriate. [32]
- Bias measurement and iterative improvement. As noted above, the company now publicly reports on political bias metrics and claims its latest GPT‑5 models show ~30% lower measured bias than GPT‑4o, with less bias surfacing in real‑world traffic. [33]
Critics counter that enforcement remains imperfect. Mozilla researchers, for example, found in early 2024 that ChatGPT could still be prompted to generate targeted campaign messaging despite OpenAI’s policies, highlighting gaps between written rules and real‑world behavior. [34]
Laws and regulations
Governments are moving, albeit unevenly:
- European Union
  - The EU AI Act requires generative AI like ChatGPT to clearly label AI‑generated content, respect copyright and be designed to avoid illegal outputs; annexes and guidance address election‑related risks specifically. [35]
  - Under the Digital Services Act, Brussels has pressed big platforms to hire native‑language fact‑checkers and label AI‑generated content ahead of EU elections, warning of possible fines for non‑compliance. [36]
  - Spain’s proposed law on AI‑content labelling and fines is one of the first national implementations of these rules. [37]
- United States
  - The Federal Election Commission chose not to write new AI‑specific rules for campaign ads in 2024, but issued an interpretive rule clarifying that existing bans on “fraudulent misrepresentation” apply to AI deepfakes. [38]
  - States are moving faster than Washington: dozens have proposed or passed laws requiring labels on AI‑generated political content, restricting deceptive deepfakes or creating task forces on AI and elections. [39]
- Global guidance
  - International IDEA and UN agencies are issuing playbooks for election bodies on how to use tools like ChatGPT for legitimate tasks (like voter education) while guarding against misuse in campaign propaganda and disinformation. [40]
Regulation alone won’t resolve polarization, but it can raise the cost of manipulative uses and set expectations for transparency.
Can ChatGPT also reduce polarization?
The story is not all doom. A growing line of research suggests that, if carefully designed and governed, AI assistants can support healthier political dialogue.
- A Duke University team built an AI mediator that helped people on opposite sides of gun‑control debates find areas of agreement and reduced negative emotions compared to unmediated discussions. [41]
- Another project, “DepolarizingGPT,” offers users three answers to each political question — one left‑leaning, one right‑leaning and one explicitly depolarizing — to surface common ground and encourage perspective‑taking. [42]
- In 2025, researchers showed that an LLM‑powered “DebunkBot” running on GPT‑4 could reduce belief in conspiracy theories by about 20% on average after a single conversation, suggesting chatbots can help debunk misinformation when used intentionally. [43]
- A 2025 study titled “Can Large Language Models Effectively Mitigate Political Polarization?” explored using LLMs to rewrite social‑media posts in less polarizing language, finding that re‑ranked feeds based on such rewrites can lower affective polarization in experimental settings (a minimal sketch of this rewriting idea follows below). [44]
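As a rough sketch of that rewriting idea, an LLM can be prompted to keep a post’s substantive claim while stripping its inflammatory framing. This assumes the OpenAI Python SDK; the prompt wording and model name are illustrative, not the cited study’s pipeline.

```python
# Minimal sketch of LLM-based "depolarizing" rewriting (illustrative only).
# Assumes the OpenAI Python SDK; the prompt wording and model name are ours,
# not the cited study's pipeline.
from openai import OpenAI

client = OpenAI()

def depolarize(post: str) -> str:
    """Rewrite a social-media post to keep its claim but drop hostile framing."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's post so its factual claims and policy position "
                    "are unchanged, but insults, sarcasm and us-versus-them framing are "
                    "removed. Return only the rewritten post."
                ),
            },
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content.strip()

print(depolarize("Only an idiot would still believe the other party cares about workers."))
```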
All of this points toward a key insight: the same capabilities that make ChatGPT potentially polarizing — scale, personalization, rhetorical skill — can also be harnessed to cool temperatures rather than inflame them. The outcome depends on design choices, incentives and governance.
Practical steps for users, platforms and policymakers
Because polarization is a societal phenomenon, there’s no single “fix” inside ChatGPT’s code. But multiple actors can reduce the risks.
For everyday users
Without turning this into advice for any specific group, there are broadly applicable practices that researchers and fact‑checkers recommend:
- Treat ChatGPT as a guide, not a final authority. Use it to generate questions, find angles and summarize sources — then click through to verify claims with primary documents and diverse outlets.
- Ask for multiple perspectives. Prompts like “summarize how different political camps view this issue” or “give me arguments both for and against, with sources” can help counter one‑sided framing (a reusable version of this prompt is sketched after this list).
- Beware emotionally charged answers. If a response feels unusually outraged, moralizing or celebratory about “your side”, take that as a signal to slow down and cross‑check.
- Learn to spot synthetic content. Even when ChatGPT labels AI‑generated text, similar styles are now widely used across the web. Media‑literacy tips (reverse image search, checking outlets, looking for disclosures) still apply.
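For readers who reach ChatGPT through the API or a custom tool, the “ask for multiple perspectives” habit from the list above can be baked into a reusable prompt. A minimal sketch, assuming the OpenAI Python SDK; the prompt text and model name are illustrative:

```python
# Minimal sketch of a "multiple perspectives" prompt wrapper (illustrative only).
# Assumes the OpenAI Python SDK and an API key; prompt text and model name are ours.
from openai import OpenAI

client = OpenAI()

BALANCED_TEMPLATE = (
    "Summarize how the main political camps view the following issue. "
    "Give the strongest arguments for and against, attribute each argument to the "
    "kind of source that typically makes it, and flag any factual claims I should "
    "verify elsewhere.\n\nIssue: {issue}"
)

def balanced_summary(issue: str) -> str:
    """Ask for a deliberately multi-perspective summary of a political issue."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": BALANCED_TEMPLATE.format(issue=issue)}],
    )
    return resp.choices[0].message.content

print(balanced_summary("a national carbon tax"))
```

The same wording can be pasted directly into the chat interface; the point is to make the request for competing framings explicit rather than relying on the model’s default answer.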
For newsrooms and civil society
News organizations and NGOs are increasingly experimenting with AI in ways that can either blunt or sharpen divides:
- Transparent use of AI in reporting. A 2025 analysis from Columbia’s Knight Institute found little evidence that mainstream outlets are using AI to fabricate political news, but stressed the importance of clear labelling and continued human editorial control. [45]
- AI‑assisted fact‑checking — with guardrails. Studies suggest that LLMs can speed up verification workflows, but also that fact‑checkers should be trained not to over‑trust fluent but wrong answers. [46]
- Public education campaigns. Election bodies in multiple countries are now explicitly teaching voters about AI deepfakes, chatbots and synthetic content as part of civic‑education efforts. [47]
For policymakers and regulators
Policy debates about AI and democracy are just beginning, but several themes are emerging:
- Target deceptive uses first. Survey experiments show that voters distinguish between benign uses of AI in campaigning and deceptive practices like impersonations or fake videos — and support strong restrictions on the latter. [48]
- Require transparency and labelling. From the EU AI Act to national laws like Spain’s, there is growing momentum behind mandatory AI‑content labels and clear disclosures when interacting with AI instead of humans. [49]
- Support independent auditing. External monitoring of tools like ChatGPT for political bias, safety and disinformation resilience can complement companies’ own evaluations and help maintain public trust. [50]
The bottom line: high stakes for trust — and for pluralism
ChatGPT and similar systems are not omnipotent puppet‑masters of democracy. Many early fears about AI totally “hijacking” elections have not (yet) materialized at scale, and some uses — like debunking conspiracy theories or mediating cross‑party conversation — may even help reduce polarization. [51]
But taken together, the latest research and real‑world cases show genuine risks:
- Political bias in ChatGPT is measurable, non‑zero and sensitive to prompt wording, model updates and fine‑tuning. [52]
- AI chatbots are surprisingly good at persuading people on contested issues, even without detailed micro‑targeting data. [53]
- They can unintentionally reinforce false beliefs and amplify emotional framings — especially when users already see them as neutral arbiters of truth. [54]
- Malicious actors are actively probing how to game or poison AI systems that millions rely on for political information. [55]
In that sense, “ChatGPT and the risks of deepening political polarization and divides” is more than a catchy headline: it names a genuine governance challenge. These tools are becoming part of the public sphere. Whether they end up widening our divides or helping bridge them depends on thousands of design decisions, policy choices and individual habits made today.
References
1. www.pewresearch.org, 2. www.thesun.ie, 3. unric.org, 4. arxiv.org, 5. apnews.com, 6. www.thesun.ie, 7. www.europarl.europa.eu, 8. www.reuters.com, 9. link.springer.com, 10. www.sciencedirect.com, 11. rais.education, 12. pmc.ncbi.nlm.nih.gov, 13. aclanthology.org, 14. news.mit.edu, 15. arxiv.org, 16. openai.com, 17. www.washingtonpost.com, 18. www.nature.com, 19. www.washington.edu, 20. www.sciencedirect.com, 21. www.washingtonpost.com, 22. www.reuters.com, 23. apnews.com, 24. timesofindia.indiatimes.com, 25. www.thesun.ie, 26. techcrunch.com, 27. apnews.com, 28. www.natureasia.com, 29. www.pnas.org, 30. pmc.ncbi.nlm.nih.gov, 31. openai.com, 32. model-spec.openai.com, 33. openai.com, 34. www.mozillafoundation.org, 35. www.europarl.europa.eu, 36. www.theguardian.com, 37. www.reuters.com, 38. www.fec.gov, 39. www.ncsl.org, 40. www.idea.int, 41. today.duke.edu, 42. depolarizinggpt.org, 43. www.axios.com, 44. dl.acm.org, 45. knightcolumbia.org, 46. www.pnas.org, 47. www.undp.org, 48. arxiv.org, 49. www.europarl.europa.eu, 50. arxiv.org, 51. knightcolumbia.org, 52. link.springer.com, 53. www.washingtonpost.com, 54. www.natureasia.com, 55. www.washingtonpost.com


