When Fact-Checking Becomes a Forum: Meta’s Shift to ‘Community Notes’ Under the EU’s Digital Regulatory Framework
Introduction:
Article 10 of the European Convention on Human Rights (“ECHR”) asserts that “everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”1 Meta’s recent decision to abandon the use of third-party fact-checkers on its platforms in favour of a ‘community notes’ system raises significant concerns regarding platform responsibility, misinformation management, and regulatory oversight. While the changes currently apply only in the US, with “no immediate plans” to abandon fact-checking in the UK or EU,2 the new system heavily conflicts with EU frameworks, particularly the Digital Services Act (“DSA”)3 and the EU Code of Practice on Disinformation 2022.4 While Article 10 guarantees the right to freedom of expression, including the freedom to receive and impart information without interference, it also permits restrictions that are prescribed by law and necessary in a democratic society, including in the interests of national security or public safety, or for the protection of the reputation or rights of others.5 Abuses of Article 10 are being documented with increasing frequency in the digital age, especially given the abundance of information circulated across social media platforms daily. Scholars have labelled the platforms a “double-edged sword”, acknowledging the advantages they offer, such as easy and virtually unlimited global communication, while also recognising their danger in spreading “fake news”.6 In theory, Meta’s community-based moderation promotes participatory governance and avoids undue censorship, aligning with the provisions of Article 10. However, the removal of disinterested third parties raises legitimate concerns as to the impartiality of the new regime, which is overseen by social media users who notably lack expertise. Furthermore, Meta, as a large online platform, cannot absolve itself of the responsibility to ensure the safety of its users. The provision of community notes does not adequately protect against the dangers of hate speech or the widespread dissemination of misinformation and disinformation.
Article 10 and the Protection of Freedom of Expression:
Freedom of speech holds significant social and political importance in the United States, being enshrined in the First Amendment of the US Constitution.7 The First Amendment is interpreted broadly by the US courts to include a strong presumption against censorship, especially where speech concerns matters of strong political and public interest.8 In a jurisdiction that upholds the importance of democracy, the profession of opinions and value judgments must be protected. The unjust monitoring and removal of vast amounts of information on social media therefore raises concerns regarding platform censorship. However, platforms retain a duty to ensure the safety of users, which includes the regulation of misinformation, disinformation, hate speech, and incitement to violence. There thus exists a delicate balancing act in which legislative bodies must engage to ensure adequate freedom and online safety for individuals. With the recent inauguration of President Trump, there has been growing alt-right sentiment within US communities, accompanied by declining trust in online platforms. Trump and his many followers have previously criticised Meta’s fact-checking provisions, describing them as “censorship of right-wing voices”.9 The growing concern for safeguarding free-speech policies appears to originate from right-wing communities, with the left, by comparison, calling for increased regulation. In 2016, a new social network called “Gab” was created as an alternative to Twitter, promoting itself with the slogan “people and free speech first”.10 The platform welcomed those banned or suspended from other platforms without any consideration of why those individuals had been banned in the first instance. However, an extensive quantitative study found that Gab was most prominently used for the dissemination of ‘news’ circulated by alt-right users and conspiracy theorists.11 Tellingly, a peripheral platform created precisely to avoid fact-based oversight was used predominantly by right-wing individuals, suggesting that the claims for which its users were originally banned elsewhere likely amounted to misinformation.
While constitutional rights tend to be of significant importance, it is largely accepted that they are not absolute and are subject to limitations when necessary. Freedom of expression is plainly vital in a free and fair democracy, and indeed in a nation that values the varied judgments and opinions of its citizens. However, neither the First Amendment protection of freedom of speech nor Article 10 of the ECHR can be used as a tool to profess hate speech or to incite violence. Given the global reach of online platforms and the varied temperaments of the individuals who use them, it is perfectly reasonable to require that an adequate level of professional third-party oversight be implemented. While Meta may be of the view that it bears no responsibility for the opinions and judgements that users circulate, whether dangerous or not, it cannot deny its status as an intermediary that provides a platform for that information, meaning it must be accountable to some degree for the consequences of its circulation.12 It is argued that Meta’s provision of ‘community notes’ does not discharge this burden, but rather shifts it onto prudent users in a form of participatory governance. The policy can be likened to the wholesale abolition of police forces with a view to empowering citizens to perform arrests wherever they see fit. Had Meta added the ‘community notes’ feature in addition to the previous third-party fact-checking system, it could have achieved the desired benefits of participation and engagement without sacrificing user safety and platform integrity. The outright replacement of professional fact-checkers, however, raises regulatory concerns too great to ignore. It must be noted that the protection of Article 10 generally extends not only to information that is favourably received, but also to information that offends, shocks, or disturbs the State or any sector of the population, as to hold otherwise would fail to ensure genuine freedom of expression.13 Yet the right is not absolute, and any legitimate interference or limitation must meet a stringent three-part test: it must be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society.14 As will be outlined below, social media platforms host a plethora of assertions, a considerable number of which stem from misinformation. Additionally, it is not uncommon for professions of hate speech and incitements to violence to be posted across these platforms, both of which can be legitimately restricted under Article 10 on valid public-safety grounds. It will be demonstrated that Meta’s provision of ‘community notes’ falls exceedingly short of this standard and does not adequately protect the safety of its users.
The Potential for Harm Following the Removal of Fact-Checkers:
The decision by Meta to remove professional third-party fact-checkers in favour of a community-driven ‘notes’ system raises profound concerns regarding the adequacy of self-regulation by social media platforms. While the change may be promoted as a way to democratise moderation, in the absence of impartial regulation there is considerable potential for harmful and unlawful content to flourish. Above all, without legitimate oversight applying verified and transparent standards, platforms risk becoming mediums for misinformation and disinformation. This has already proven an issue, considering the vast proliferation of misinformation spread across platforms regarding the COVID-19 pandemic. In a study that examined the first phase of the pandemic, in which posts relating to COVID-19 were assessed for their reliability, up to 28.8% of posts could be classified as misinformation.15 The consequences of the global spread of this medical misinformation cannot be taken lightly; indeed, one of the study’s main recommendations was the official supervision of such content by social media platforms.16 Considering that this worrying level of medical misinformation was reported during a period when fact-checking was still in place, it is likely that dangerous misinformation of this kind has multiplied since the removal of that oversight. Given that Article 10 provides for the limitation of freedom of expression in the interests of “public safety” and “for the protection of health”, there would appear to be an indisputable burden on platforms to prevent the circulation of dangerous medical misinformation, and Meta’s provision of ‘community notes’ certainly does not meet the required standard.17
In addition to medical misinformation, social media sites routinely remain platforms for electoral disinformation and antidemocratic sentiment. False claims regarding the integrity of elections, vote-rigging, and ballot fraud all pose a serious threat to democratic legitimacy. This was evident during the 2020 US election, where the spread of disinformation fuelled the Capitol insurrection. A 2022 study found that 96% of the rioters examined were active on platforms like Facebook and Instagram, where they shared and consumed misinformation about the 2020 election in the lead-up to the insurrection.18 Furthermore, Donald Trump’s unfounded assertions of electoral fraud were cited as a key motivation for right-wing participation.19 This further demonstrates the danger of alt-right content proliferating without regulation across platforms, especially considering the right-wing resistance to regulation of speech. Alt-right content, including xenophobic, white nationalist, or anti-Muslim sentiment, thrives in environments lacking strong moderation, as was seen with the emergence of the peripheral platform “Gab”.20 Fact-checkers are vital in providing credible, non-partisan corrections that can counteract these narratives before they gain traction, and there is evident danger in allowing such narratives to proliferate in the absence of regulation. False stories linking immigrants to crime or terrorism that circulate unchecked have deplorable consequences for the safety of individuals. A 2021 study analysing anti-immigrant sentiment as expressed through language on Facebook highlighted the link between higher anti-immigrant attitudes and more negative sentiment expressed on social media through the use of anger and swear words.21 Furthermore, a 2024 study demonstrated that online discussions about immigrants were predominantly negative, with frequent expressions of contempt, anger and fear.22 The study also documented common concerns involving economic competition and crime, reflecting unsubstantiated perceptions of threat.23 This vast proliferation of anger and fearmongering can only harm the safety of immigrants, with recent research suggesting that “radical right-wing sentiments on social media may instigate and/or facilitate violent (anti-immigrant) political action.”24 Considering that these risks were already pertinent prior to Meta’s removal of third-party fact-checking, there now exists an even greater risk of physical violence given the vast amount of unregulated content on Meta’s platforms exhibiting hateful sentiment. As Article 10(2) of the ECHR permits legitimate restrictions on freedom of expression for the “prevention of disorder or crime” and in the interests of “public safety”,25 Meta’s abdication of responsibility for regulating hateful content would appear to be in direct contravention of the safeguards provided by Article 10(2).
The Efficacy and Integrity of Community Notes Compared to Third-Party Fact-Checkers:
Meta’s decision to replace professional third-party fact-checkers with the ‘community notes’ system in the US raises legitimate concerns regarding the effectiveness, integrity, and potential consequences of such a shift. When the ‘community notes’ system used in the US is compared to the impartial fact-checking model used in Europe (operated by the independent organisation Full Fact),26 a range of risks associated with community-based moderation becomes apparent. Primarily, it must be recognised that Meta’s decision to remove third-party fact-checkers was motivated by self-benefit in the form of reduced business costs, not the interest of user safety. The involvement of third-party fact-checking organisations like Full Fact comes at a financial cost, as Meta must compensate the organisation for its work. By abandoning this system in favour of a community-driven model, Meta reduces the cost of fact-checking, as it no longer needs to pay an external organisation. While this may make the system more financially efficient, the saving is achieved at the cost of accuracy, reliability, and user safety. The previous model of professional fact-checking increased the overall credibility and responsibility of the platform and ensured alignment with the relevant regulatory frameworks, such as the DSA.27
Furthermore, professional fact-checkers from organisations like Full Fact are trained in journalistic standards, critical thinking, and research methodologies. These experts typically have backgrounds in data analysis, law, and journalism, providing an informed and impartial approach to fact-checking. In contrast, the ‘community notes’ system relies on ordinary users to identify and annotate misleading content. While this may democratise the process, it raises legitimate concerns regarding users’ lack of specialised knowledge, especially in complex contexts such as medical misinformation (recall the 28.8% misinformation rate identified in posts about the COVID-19 pandemic).28 There is a clear risk to user safety where misinformation concerns medical advice, a risk that could be largely mitigated were Meta to reinstate professional fact-checkers. Full Fact employs qualified specialists who can verify claims across various specialised fields, including politics, health, science, and law.29 Organisations like Full Fact follow strict, standardised, and transparent procedures for fact-checking.30 They operate under codes of ethics and are subject to external oversight. These procedures ensure robust transparency and accountability, and when errors are identified, they are publicly acknowledged and subsequently corrected.31 ‘Community notes’ lacks the rigorous editorial procedures associated with professional fact-checking. The process is neither standardised nor regulated, as it relies on user discretion to annotate material. There is a conspicuous lack of oversight, with no centralised authority regulating the quality of annotations. Furthermore, the anonymity of contributors can further undermine the integrity of the editorial process through the disinhibition effect.32 Full Fact takes a neutral stance towards information posted online and seeks to assess its credibility as impartially as possible.33 Community notes, by contrast, relies on the input of users, who may have political, ideological, or personal biases that influence their annotations. Users may reinforce their own biases by engaging only with content that aligns with their beliefs, leading to echo chambers in which misleading content goes unchallenged.34
Analysis and Recommendations: Current Legislative Framework and Possible Future Regulatory Actions
While the current shift to a community-based regulation system applies only to Meta’s platforms in the US, the ‘community notes’ system heavily conflicts with the EU’s digital regulatory framework and would likely breach EU standards should the fact-checking process be abandoned in the EU as well. The DSA, adopted in 2022, is aimed at regulating digital platforms and enhancing the safety of online spaces.35 It has been praised by scholars for instilling unity throughout EU Member States, whose digital regulation previously exhibited high levels of fragmentation.36 The DSA establishes requirements for online platforms to take greater responsibility for content moderation and the protection of users from harmful content.37 It applies to all digital platforms operating within the EU, including ‘Very Large Online Platforms’ such as Meta’s, and imposes obligations depending on the platform’s size, reach, and impact.38 Platforms are expected to take action to prevent the spread of disinformation by removing or reducing the visibility of such content. Furthermore, platforms must allow independent audits to assess their compliance with the regulation. One of the key obligations resides in Article 35, which addresses the “mitigation of risks” and contemplates the use of “content moderation processes” as a mitigation measure.39 The use of professional third-party fact-checkers would help satisfy this requirement under the DSA. By removing a standardised, transparent, and impartial system of fact-checking, Meta risks falling exceedingly short of its legal obligations under the DSA. Should Meta attempt to replace third-party fact-checking with a community-based moderation system within the EU, it would likely fail to meet the regulatory standards the DSA imposes.
The EU Code of Practice on Disinformation was updated in 2022 to enhance its effectiveness in combating disinformation across EU platforms.40 As a soft-law instrument, the Code of Practice is not legally binding; rather, it offers a voluntary, self-regulatory framework encouraging digital platforms to take greater responsibility for preventing the spread of false information. The Vice-Chair of the European Platform of Regulatory Authorities has stated that the Code of Practice aims to combat disinformation by assessing platform compliance with the DSA.41 One of the Code’s specific commitments involves “empowering the fact-checking community” by ensuring more consistent use of fact-checking across online services in the Member States, including the requirement to provide fair financial compensation for fact-checkers’ work.42 Meta’s removal of independent fact-checkers clearly undermines these provisions, placing profit above legislative adherence and user safety. By shifting to a community-driven model that lacks standardised oversight of posted information, Meta has effectively reduced the role of experts, potentially undermining the trustworthiness of its platforms. Both the DSA and the Code of Practice aim to strengthen the responsibility of online platforms in response to concerns regarding the spread of misinformation and disinformation. Meta’s shift from third-party fact-checking to a community-driven model conflicts with its obligations under both instruments. While the shift may offer the platform the benefits of reduced costs and increased user engagement, it risks compromising the accuracy, safety, transparency, and integrity of its platforms.
Conclusion:
Meta’s decision to abandon professional third-party fact-checking in favour of a ‘community notes’ system represents a regressive step motivated by self-profit. Without legitimate oversight applying verified and transparent standards, Meta’s platforms risk becoming mediums for misinformation, disinformation, hate speech, and incitement to violence. While Article 10 of the ECHR provides for the “freedom of expression”, this right is not absolute. Any legitimate limitation must meet a stringent three-part test: it must be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society. This allowance for limitation protects the safety of users on online platforms and ensures that incitements to violence do not proliferate unfettered. Professional third-party fact-checkers possess specialist expertise in analysing and correcting misinformation, and they utilise standardised, transparent, and unbiased procedures regulated by a centralised authority. The community-driven system cannot adhere to the same standards of rigour and expertise. It is thus evident that the use of professional fact-checkers ensures greater safety for online users. While the community-driven model currently operates only in the US, it starkly conflicts with the EU’s digital regulatory frameworks, and should Meta make the shift in the EU, the model would likely be in breach of the DSA and the Code of Practice 2022.
Footnotes:
1 Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended) (ECHR) art 10(1).
2 Liv McMahon, Zoe Kleinman and Courtney Subramanian, “Facebook and Instagram get rid of fact checkers” BBC News (7 January 2025).
3 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.
4 European Commission, Strengthened Code of Practice on Disinformation (2022).
5 Supra 1, art 10(2).
6 Esma Aimeur, Sabrine Amri and Gilles Brassard, “Fake news, disinformation and misinformation in social media: a review” (2023) 13 Social Network Analysis and Mining 30.
7 US Constitution amend I.
8 Thomas L. Tedford and Dale A. Herbeck, Freedom of Speech in the United States (New York: Random House 1985, updated Autumn 2024).
9 Supra 2.
10 Savvas Zannettou and Barry Bradlyn, “What is Gab: A Bastion of Free Speech or an Alt-Right Echo Chamber” (2018) Companion Proceedings of The Web Conference 2018, 1007–1014.
11 Ibid.
12 Christophe Geiger, Giancarlo Frosio and Elena Izyumenko, “Intermediary Liability and Fundamental Rights” (2020) 21 Columbia Journal of European Law 49.
13 Toby Mendel, A Guide to the Interpretation and Meaning of Article 10 of the European Convention on Human Rights (Centre for Law and Democracy 2013).
14 Supra 1, art 10(2).
15 Elia Gabarron, “COVID-19-related misinformation on social media: a systematic review” (2021) 99 Bulletin of the World Health Organization 455.
16 Ibid.
17 Supra 1, art 10(2).
18 Jian Wang, “The U.S. Capitol Riot: Examining the Rioters, Social Media, and Disinformation” (2022) Harvard University ProQuest Dissertations and Theses.
19 Ibid.
20 Supra 10.
21 Saifuddin Ahmed, “Social Media Use and Anti-Immigrant Attitudes: Evidence from a Survey and Automated Linguistic Analysis of Facebook Posts” (2021) 31 Asian Journal of Communication 276.
22 Saifuddin Ahmed, “Social Media and Anti-Immigrant Prejudice: A Multi-Method Analysis of the Role of Social Media Use, Threat Perceptions, and Cognitive Ability” (2024) 15 Frontiers in Psychology 1280366.
23 Ibid.
24 Anton Törnberg and Mattias Wahlström, “Unveiling the radical right online: Exploring framing and identity in an online anti-immigrant discussion group” (2018) Sociologisk Forskning 267.
25 Supra 1, art 10(2).
26 Supra 2.
27 Supra 3.
28 Supra 15.
33 Supra 29.
34 Matteo Cinelli, “The echo chamber effect on social media” (2021) 118 Proceedings of the National Academy of Sciences e2023301118.
35 Supra 3.
36 Aina Turillazzi, “The digital services act: an analysis of its ethical, legal, and social implications” (2023) 15 Law, Innovation and Technology 83.
37 Supra 3.
38 Supra 3, Article 33.
39 Supra 3, Article 35.
40 Supra 4.
41 Ramsha Jahangir, “The EU’s Code of Practice on Disinformation is Now Part of the Digital Services Act. What Does It Mean?” (25 February 2025) Tech Policy Press.
42 Supra 4.