Consultation For Your Reputation

Concerns Around the End of Fact Checking on Social Media

On Behalf of | Mar 15, 2025 | Firm News

Meta, the company that owns Facebook, Instagram, and WhatsApp, recently announced that it will stop using third-party fact checkers on all of its platforms. Mark Zuckerberg, Meta’s chief executive, believes the fact-checking systems put in place in 2016 have made too many mistakes, leading to an infringement on free speech. However, the change has heightened fears about disinformation, which has been rampant on social media in recent years.

Due to considerable public pressure, Meta began using outside organizations such as The Associated Press, ABC News, and Snopes, along with other global organizations, to comb through potentially false or misleading posts. These organizations could then rule whether posts needed to be annotated or removed. Nearly 100 organizations working in more than 60 languages were part of Meta’s fact-checking program.

From 2016-2024, Meta spent billions of dollars to fix content moderation issues, but Zuckerberg reports that he grew frustrated as an increasing number of people voiced their complaints about the fact-checking program, often saying it was politically biased.

Switch To Community Notes

Meta’s policy change around fact checking is widely seen as a move to align the company with the new Trump administration, which places a high value on free speech. Instead of using outside fact-checking organizations, the company is moving to “community notes”, a program first introduced on X. Over the next several months, Meta will make the switch, beginning in the U.S. and, in time, possibly extending to other countries as well.

Community notes rely on crowdsourcing to flag potentially misleading or false content. On X, the program uses breakout boxes that appear beneath a post that has been flagged. However, a note only appears after enough contributors, including those who usually disagree with one another, rate it as helpful. The note includes a correction of the post and often a link to an online source that supports the fact check. It’s not yet clear whether Meta’s community notes will work in exactly the same way.

Even with this switch to community notes, Meta says it plans to continue moderating content related to drugs, terrorism, and child exploitation, as well as fraud and scams.

Concerns Around The End Of Fact-Checking On Social Media

According to some experts, Meta’s fact-checking program wasn’t perfect, but it was effective in labeling and stopping many sources of disinformation. Accountable Tech, a digital watchdog organization, is especially concerned about the end of fact checking on Meta’s platforms, believing it will reopen the floodgates to a surge of hate, disinformation, and conspiracy theories, and may even lead to real-world violence.

Academic research found that the fact checking program was effective in countering vaccine misinformation, encouraging users to retract false or misleading posts, and improving users’ ability to identify misleading content.

Furthermore, experts report that X’s community notes process is slow, allowing misinformation to spread while notes are written and debated. They also say they’ve seen “massive wars” break out within the notes, and some false or misleading posts may go unchecked due to the failure to reach a consensus. As a result, these experts believe community notes are much less effective at stopping disinformation.

With the loss of third-party fact checking, more false information may be spread on Meta platforms, but the real consequences remain to be seen.