In January 2025, Meta, the parent company of Facebook, Instagram and Threads, announced it would stop factchecking on these platforms, with CEO Mark Zuckerberg declaring that “factcheckers have just been too politically biased” (Booth, 2025). This announcement ignited global controversy and intensified the debate over who has the authority to define truth in online spaces.
Since 2016, Facebook has relied on professional factchecking partners to police mis- and disinformation (Understanding Meta’s Fact-Checking Programme, n.d.). Critics, however, have argued that this model is slow, lacks transparency, and reflects the biases of the factcheckers themselves.
On X (formerly Twitter), verification is community-driven: users can flag content they believe to be dubious and provide sources justifying their position. This shift represents more than a change in content moderation strategy; it reveals a deeper tension between expert authority and the voice of the masses.
Should we trust the crowd over experts? What happens when social validation is more influential than journalistic rigour?
The role of traditional media
Historically, subject specialists and editors have been the gatekeepers, holding the power to set the public opinion agenda by choosing which of the multitude of available messages we actually see. Gatekeeping theory (Shoemaker & Vos, 2009) builds on the work of Lewin (1947) and White (1950) to explain how information flows are controlled by designated individuals or institutions. In journalism in particular, editors have acted as filters, shaping public knowledge by deciding what to publish.
This control over which information is made visible laid the foundation for how media can also influence the prioritisation of information, a process explored since the 1970s in agenda-setting theory. McCombs and Shaw (1972) argued that, by focusing attention on certain issues, the media may not tell us what to think, but they do tell us what to think about, thereby influencing public perception and priorities.
News delivery via radio, television and newspapers illustrates gatekeeping and agenda-setting in practice: editors have historically used both to manage the time and space limits of their bulletins and pages. The advent of ‘new media’ – initially in the form of Web 1.0 – marked a shift in these dynamics. Spatial limitations on publishing were effectively eliminated, and time constraints became less about programming schedules and more about gaining and retaining our attention. Despite this expanded publishing capacity, however, content creation and curation largely remained in the hands of the gatekeepers.
Web 2.0 and the rise of user-contributed content
The volume of available space for public discourse was fundamentally changed by the decentralisation of information flows on social media platforms, which empower us to create, share, and amplify content on any subject imaginable and from any point of view.
In traditional media, professional journalistic practice and editors are the gatekeepers that filter content before it reaches the public arena and our gaze (Shoemaker & Vos, 2009). But as the volume of available content exploded, the platforms demonstrated a reluctance to take on this gatekeeper role, citing freedom of speech and their legal protection as platforms rather than publishers (Fraley, 2023). While gatekeepers limit access to information before publication, the concept of gatewatching describes how, in the absence of gatekeepers, users collectively observe, highlight, and circulate content that has already entered the public domain (Bruns, 2005).

Wikipedia’s open-edit model exemplifies gatewatching in action. While this collective management of information can enhance transparency and inclusivity, it also sometimes results in “edit wars”, where a single article is edited and re-edited repeatedly by contributors seeking to emphasise opposing views. Figure 1 shows only a small set of such controversial topics. These conflicts highlight the challenges of maintaining accuracy and neutrality in a decentralised environment and raise questions about how we identify and resolve conflicting claims online.
Figure 1: Wikipedia’s lamest edit wars (McCandless, n.d.)
Identifying unreliable and false information
Our ability as information consumers to discern misinformation is questionable. According to Schwalbe et al. (2024), it is shaped more by cognitive style and belief systems than by a consistent commitment to accuracy. Their study found that individuals who view truth as politically constructed, or who rely more on intuition than analysis, tend to be overconfident in their judgements and less aware of their own susceptibility to misinformation.
We also have a long-standing mistrust of mass media (FUPress, 2013), with accusations and counter-accusations of bias (Kang, 2024). According to Newman et al. (2024, p. 34), “…across the world, much of the public does not trust most news most of the time”. This erosion of trust has been exacerbated by the rise of social media platforms as news distribution hubs, where the rapid spread of opinion as fact, coupled with sensationalised headlines, often overshadows factual content (Friggeri et al., 2014). Consequently, while we expect truth, our consumption patterns suggest that what we actually want is content that reinforces our views, which in turn can reinforce misinformation and perpetuate echo chambers (Sunstein, 2001).
Another challenge in identifying what is true is the illusory truth effect: repeated exposure to a claim increases its perceived truthfulness, even when the claim is false. Lewandowsky, Ecker, and Cook (2017) explain that repetition makes a statement easier for our brains to process, so we are more likely to accept a claim we have seen frequently as true. On social media this dynamic is amplified: content that is widely shared or promoted by engagement-driven algorithms is more likely to be accepted as factual, even if it has never been verified.
How do social media platforms assist consumers in identifying reliable and factual information?
After-publication content moderation
Despite the reluctance of platform owners to engage in gatekeeping, moderation of content after publication takes place on all social network platforms (Gillespie, 2018). Wikipedia describes content moderation as “the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting”, the aim being to “…remove or apply a warning label to problematic content or allow users to block and filter content themselves”.
One high-profile example of such moderation, cited by Gillespie (2018, p. 1), occurred in 2016, when Facebook removed the iconic Napalm Girl photo from a Norwegian journalist’s post for violating nudity policies, triggering widespread criticism and accusations of censorship. The platform later reversed its decision, acknowledging the need to contextualise moderation decisions. Early moderation systems tried to manage the overwhelming scale of user-generated content, but incidents such as this highlighted the lack of transparency and accountability in platform governance. As disinformation became more prominent, especially around elections and public health crises, Meta began forming partnerships with factchecking organisations in 2016 as a means of restoring credibility and controlling the spread of false information, while Twitter did not introduce its crowdsourced verification system, Birdwatch (later renamed Community Notes), until 2021.
Crowdsourcing moderation
Community-driven factchecking models offer a more democratic information environment but risk reinforcing divisions. Research into group polarisation shows that when like-minded individuals discuss issues, their views often become more extreme (Sunstein, 2002). On social media, where we self-select into communities, systems like Community Notes may amplify pre-existing biases rather than correct them.
The effectiveness of collective editing is further compromised in polarised environments, where competing perspectives tend to reinforce, not balance, each other. Algorithms, which prioritise engagement over accuracy, exacerbate this, with emotionally charged content spreading faster than corrections (Vosoughi et al., 2018; Jasser et al., 2022). In such an environment, even community-driven factchecking struggles to keep up with misinformation.
Barriers to participation in moderation may also skew the process: politically motivated users are more likely to propose or rate notes, while more moderate users may be reluctant to get involved. This undermines the perceived neutrality of community-based factchecking.
X’s Community Notes requires a note to be flagged as ‘useful’ before it gains visibility, in an attempt to prevent one side’s perspective from dominating, and early research suggests that notes achieving visibility tend to be more balanced and nuanced than typical comment threads (Chuai et al., 2024).
How effective is Community Notes as a moderating tool? In 2024, Maldita.es, a Spanish factchecking organisation, analysed over 1.17 million Community Notes from X and found that factcheckers were the third most-cited source in proposed notes, after other threads on X and Wikipedia (Maldita.es, 2025). Notes citing verified factchecking organisations (those verified by either the European Fact-Checking Standards Network (EFCSN) or the International Fact-Checking Network (IFCN)) were proposed more quickly and were more likely to become visible.
However, the chances that a proposed correction is both timely and visible are slim. Only 8.3% of all proposed notes on X ultimately become visible, rising to 12% for notes citing factcheckers and 15.2% when the source is a European factchecker.
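To make the visibility rule concrete, here is a minimal, deliberately simplified sketch of the idea described above: a proposed note is surfaced only once it has been rated ‘useful’ by a minimum number of contributors whose ratings span more than one viewpoint. The thresholds, class and field names are assumptions made for illustration; X’s actual Community Notes scoring model is considerably more sophisticated than a raw count.

```python
from dataclasses import dataclass, field

# Illustrative thresholds only: X's real Community Notes scoring is far more
# sophisticated than counting ratings, so treat this as a hypothetical sketch.
MIN_USEFUL_RATINGS = 5       # minimum 'useful' ratings before a note can show
MIN_DISTINCT_VIEWPOINTS = 2  # ratings must span more than one viewpoint


@dataclass
class ProposedNote:
    """A hypothetical proposed note and the 'useful' ratings it has received."""
    text: str
    useful_ratings: list = field(default_factory=list)  # rater viewpoint labels

    def is_visible(self) -> bool:
        """Show the note only when enough contributors, drawn from more than
        one viewpoint, have marked it as useful."""
        enough = len(self.useful_ratings) >= MIN_USEFUL_RATINGS
        diverse = len(set(self.useful_ratings)) >= MIN_DISTINCT_VIEWPOINTS
        return enough and diverse


note = ProposedNote("Claim lacks context; see source Y.")
note.useful_ratings += ["viewpoint_a"] * 4 + ["viewpoint_b"]
print(note.is_visible())  # True: five ratings drawn from two viewpoints
```

The point of the viewpoint check is the one made above: a note should not become visible on the strength of one side’s ratings alone.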
Meta’s proposed adoption of a Community Notes model similar to the one already in use on X reflects the continued reluctance of social media platforms to take a role in moderating content, in addition to their previously discussed refusal to gatekeep. In this environment, the burden of assessing accuracy increasingly defaults to crowdsourced verification.
Ultimately, the success of these crowdsourced moderation tools depends not only on design but also on cultural factors. A healthy truth ecosystem requires the recognition that no group has a monopoly on truth and a willingness on the part of contributors to engage in good faith. Without these, even well-designed moderation tools may become battlegrounds for information warfare rather than spaces for collective verification.
Visibility in the age of algorithms
As we have shown, social media platforms have refused the role of gatekeeper and are delegating moderation to users, yet it is on these platforms that the agenda of public discourse is being set. Powerful algorithms determine what we see first, which voices are amplified, and which topics trend. Have these algorithms become proxy agenda-setters?
Unlike traditional editors, algorithms influence agenda-setting by prioritising content based on engagement metrics, not editorial judgement or veracity. As Tufekci (2015) notes, algorithmic filtering introduces a layer of invisible curation that can distort the informational landscape, reinforcing filter bubbles and limiting exposure to diverse perspectives. This raises questions about accountability: if algorithms are shaping our information diets, who holds them responsible?
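To illustrate the contrast being drawn here, the short sketch below ranks the same three posts two ways: by raw engagement, and by an engagement score damped by an assumed credibility estimate. Nothing in it reproduces any platform’s actual ranking formula; the data, weights and field names are invented for the example.

```python
# Illustrative only: contrasts engagement-driven ranking with a hypothetical
# credibility-weighted alternative. Scores, weights and field names are invented.
posts = [
    {"title": "Sensational rumour", "engagement": 5_000, "credibility": 0.1},
    {"title": "Verified report",    "engagement": 1_200, "credibility": 0.9},
    {"title": "Opinion thread",     "engagement": 3_100, "credibility": 0.5},
]

# Engagement-first ranking: the metric the feed is said to optimise for.
by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Hypothetical alternative: damp reach by an estimate of source credibility.
by_credibility = sorted(
    posts, key=lambda p: p["engagement"] * p["credibility"], reverse=True
)

print([p["title"] for p in by_engagement])   # the rumour ranks first
print([p["title"] for p in by_credibility])  # the rumour drops to last
```

The difference between the two orderings is the invisible curation Tufekci describes: the same content, a different information diet.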
Rebuilding trust
In the age of community-driven content moderation, where professional factchecking no longer holds a central role and algorithms shape what is seen, shared, and ignored, we are both witness and contributor to the elevation of collective judgement over expertise and the revaluation of truth itself. As algorithmic systems increasingly set the agenda without transparency or oversight, the question of accountability becomes pressing.
Who is responsible when engagement-driven curation contributes to the distortion of public understanding? The challenge is no longer simply to verify facts – though that challenge remains unmet – but to equip us as information consumers with both the intellectual and technical tools to make discerning choices about our online information consumption and subsequent interactions.
The future of information online should not rest solely on the wisdom of crowds or the authority of experts, but on finding an equilibrium – one that balances openness with responsibility, visibility with credibility, and speed with scrutiny.
Factchecking on Meta platforms was disabled in the US on 7 April 2025.
References
Booth, R. (2025, January 7). Meta to get rid of factcheckers and recommend more political content. The Guardian. https://www.theguardian.com/technology/2025/jan/07/meta-facebook-instagram-threads-mark-zuckerberg-remove-fact-checkers-recommend-political-content
Bruns, A. (2005). Gatewatching: Collaborative Online News Production. P. Lang.
Chuai, Y., Tian, H., Pröllochs, N., & Lenzini, G. (2024). Did the Roll-Out of Community Notes Reduce Engagement With Misinformation on X/Twitter? Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1–52. https://doi.org/10.1145/3686967
Fraley, J. (2023). Social Media Platforms as Publishers: Evaluating the First Amendment Basis for Content Moderation. Princeton Legal Journal Forum, 3.
Friggeri, A., Adamic, L., Eckles, D., & Cheng, J. (2014). Rumor Cascades. Proceedings of the International AAAI Conference on Web and Social Media, 8(1), Article 1. https://doi.org/10.1609/icwsm.v8i1.14559
FUPress. (2013, January 11). Age-Old Media Bias. Fordham University Press. https://www.fordhampress.com/2013/01/11/age-old-media-bias/
Gao, Y., Zhang, M. M., & Rui, H. (2024). Can Crowdchecking Curb Misinformation? Evidence from Community Notes. SSRN. https://doi.org/10.2139/ssrn.4992470
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Jasser, J., Garibay, I., Scheinert, S., & Mantzaris, A. V. (2022). Controversial information spreads faster and further than non-controversial information in Reddit. Journal of Computational Social Science, 5(1), 111–122. https://doi.org/10.1007/s42001-021-00121-z
Kang, J. C. (2024). How Biased Is the Media, Really? The New Yorker. https://www.newyorker.com/news/fault-lines/how-biased-is-the-media-really
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
Lewin, K. (1947). Frontiers in Group Dynamics: II. Channels of Group Life; Social Planning and Action Research. Human Relations, 1(2), 143–153. https://doi.org/10.1177/001872674700100201
Maldita.es. (2025, April 24). What happens if you “get rid” of fact-checking? Medium. https://medium.com/@maldita.es/what-happens-when-you-get-rid-of-fact-checking-4b6a7c7dce8a
McCandless, D. (n.d.). Wikipedia’s lamest edit wars [Infographic]. Information is Beautiful. https://informationisbeautiful.net/visualizations/wikipedia-lamest-edit-wars/
McCombs, M. E., & Shaw, D. L. (1972). The Agenda-Setting Function of Mass Media. The Public Opinion Quarterly, 36(2), 176–187. http://www.jstor.org/stable/2747787
Newman, N., Fletcher, R., Robertson, C. T., Arguedas, A. R., & Nielsen, R. K. (2024). Public perspectives on trust in news (p. 168). Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024/public-perspectives-trust-news
Schwalbe, M. C., Joseff, K., Woolley, S., & Cohen, G. L. (2024). When politics trumps truth: Political concordance versus veracity as a determinant of believing, sharing, and recalling the news. Journal of Experimental Psychology: General, 153(10), 2524–2551. https://doi.org/10.1037/xge0001650
Shoemaker, P. J., & Vos, T. (2009). Gatekeeping Theory (1st edition). Routledge.
Sunstein, C. R. (2001). Echo Chambers: Bush v. Gore, Impeachment, and Beyond. Princeton University Press.
Sunstein, C. R. (2002). The Law of Group Polarization. Journal of Political Philosophy, 10(2), 175–195. https://doi.org/10.1111/1467-9760.00148
Tufekci, Z. (2015). Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency. Colorado Technology Law Journal, 13, 203. https://scholar.law.colorado.edu/ctlj/vol13/iss2/4
Understanding Meta’s fact-checking programme. (n.d.). Meta for Government and Nonprofits. Retrieved 24 April 2025, from https://en-gb.facebook.com/government-nonprofits/blog/misinformation-resources
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
White, D. M. (1950). The “Gate Keeper”: A Case Study in the Selection of News. Journalism Quarterly, 27(4), 383–390. https://doi.org/10.1177/107769905002700403