Speaking at CES 2025 in Las Vegas, X CEO Linda Yaccarino praised Meta and Mark Zuckerberg for the company's recent policy changes, which eliminate third-party fact-checking and loosen moderation on Facebook, Instagram, and Threads. Both Yaccarino and X owner Elon Musk have been vocal advocates of the user-driven fact-checking feature based on community notes that is now live on their platform, and the X executive believes Meta's decision to follow suit is the right one.
“I think it’s really exciting when you think about the benefits that community notes can bring to the world. It’s really cool to see Mark and Meta understand that. Mark, Meta, welcome to the party,” Yaccarino said.
However, experts have criticized the platform’s approach to combating misinformation as flawed. For instance, a report last year by disinformation researchers at the Center for Countering Digital Hate (CCDH) found that many misleading posts, including high-profile posts by Musk himself, can rack up billions of views without receiving any corrections. While some observers see user-driven fact-checking as a step toward more open dialogue, others point out that relying solely on the community may overlook subtle or complex misinformation, allowing misleading narratives to spread widely before any effective action is taken. Supporters of the approach counter that a broader community effort diversifies perspectives and encourages a more engaged user base.
We’ll keep you updated as these discussions continue, especially since X has been vocal about emphasizing user participation in moderation. From a neutral standpoint, it remains to be seen how well the new community-based system can counter the spread of misleading information on social media. Critics maintain that stricter oversight is necessary, while advocates insist that decentralized fact-checking can be just as effective as traditional methods, if not more so.
Meta’s Shift Away From Fact-Checking
On January 7, Mark Zuckerberg’s Meta, which owns the hugely popular social networks Facebook, Instagram, and Threads, announced plans to restore what it describes as free speech on its platforms. As part of this effort, the platforms will radically scale back their strict approach to content moderation, which has often swept up innocuous posts, and end their existing fact-checking processes. Meta will no longer involve third-party fact-checkers and will instead rely on notes from users. Automated moderation systems will now be applied only to posts related to terrorism, child sexual exploitation, drugs, and fraud; for less serious violations, the company will rely on user reports.
This new moderation policy reflects a broader industry debate on how social media platforms should balance free expression with the need to curb harmful content. Advocates of Meta’s changes claim that loosening the rules will encourage more robust and authentic discussions while also reducing the risk of over-censorship. At the same time, detractors worry that this shift could open the door to greater misinformation. They argue that, in a large and diverse user base, posts can travel quickly across the platform long before user complaints reach moderators.
Meta’s decision to adopt a “community notes” style of verification, broadly similar to the system currently in place on X, has been interpreted by observers as an acknowledgement that user-driven oversight can play a significant role in identifying inaccurate claims. Yet concerns persist about the effectiveness of such a system if most users do not participate, or if certain groups systematically misuse community notes to push a particular agenda. Supporters, meanwhile, emphasize that bringing more people into the moderation process gives the public a sense of ownership and reduces the perception that a single authority is making all the decisions.
Ultimately, much depends on how smoothly Meta transitions to its new policy and how the user-driven fact-checking framework operates in practice. Skeptics view it as an uncertain path to managing misinformation, while enthusiasts see a promising experiment in crowdsourced moderation. Whether these changes will truly restore free speech while fairly policing harmful content remains to be seen. Given the ongoing debates around social media responsibility, industry experts will continue to watch these developments closely, and we’ll keep you updated on any further shifts in approach.