
Democracy observers were dismayed in January when Meta – owner of Facebook, Instagram, WhatsApp and Threads – announced that it would be ditching its fact-checking system. The company had previously worked with third-party fact-checkers to detect misinformation circulating on its platforms. This content was not removed but had warnings added to it.
The system is now being replaced with “community notes”, under which volunteer users monitor content on the platform. Posts are flagged and a warning label appears once a majority consensus has been reached among the volunteers. However, it’s not yet clear how, or whether, these people will be vetted.
The “community notes” approach was pioneered by X (formerly Twitter), and research into its system by the Washington Post found that only around 10 per cent of proposed notes end up being shown. The notes were found to be effective at tackling misinformation in some cases, but the bigger concern is the rationale that Meta CEO Mark Zuckerberg used to justify the change. He accused fact-checkers of being politically biased and said the change was necessary to prioritise free speech. He also said he’d be moving the company’s content moderation teams out of California – a Democratic stronghold with a liberal reputation – to Texas, a majority Republican state “where there is less concern about the bias of our teams”.
The move looks like part of Meta’s attempts to appease President Donald Trump, who has previously accused social media platforms of censorship and threatened intervention. Trump himself said he believed the changes were in response to his warnings against the company.
With social media an increasingly important source of news and information, the announcement sparked serious concerns about the potential impact on public discourse and democracy – especially because it’s not just Meta making these concessions. Beyond the community notes system, X has also reinstated previously banned accounts since Elon Musk took over in 2022, and research suggests that hate speech on the platform has significantly increased since then.
The consequences are not to be underestimated. We saw in the UK last summer how misinformation online can lead to real-world violence, when false allegations on social media that an immigrant was responsible for the Southport attacks led to riots and anti-immigrant assaults. More broadly, it is impossible to hold the powerful to account without access to accurate information.
This article is a preview from New Humanist’s spring 2025 issue.