Meta and TikTok’s Current Efforts to Curb Misinformation

Misinformation is rife on platforms designed for instant sharing and high engagement. Two major platforms, Meta and TikTok, have evolved their approaches to addressing misinformation online, differing in how they combine rule-based policies, labeling, distribution limits, and human review. In the United States, for instance, Meta is shifting from third-party fact-checking on Facebook and Instagram to relying on Community Notes across Facebook, Instagram, and its newer social networking app, Threads. Community Notes are user-written notes that add context to potentially misleading posts, with contributors rating one another’s notes, and the program is monitored by Meta staff. Misinformation on Meta remains governed by the same “Integrity and Authenticity” policy area as before and is reported on in the same enforcement reports (Meta, 2025; Pegum, 2024).

Meta does more than just label misinformation; it now publishes data about enforcement of that policy and the reach of potentially misleading content in a single section of its Transparency Center (Transparency Center, n.d.). In a January 2025 post, Meta announced that it would stop relying on third-party fact-checkers to flag misleading posts in the U.S. and would turn instead to users through Community Notes, which offer context on specific points of contention in posts (Kaplan, 2025). Meta said it is making the change because of feedback that third parties were presenting a biased selection of fact checks; under the new approach, users write the notes and rate each other’s contributions. Before this shift, Meta had built a global fact-checking network of more than 90 organizations working in over 60 languages and invested more than $150 million in the effort since 2016 (Meta, 2025; Meta, 2024).

Meta’s system has real strengths. Removal is only one part of it; context labels, warnings, and distribution limits are also used and are often more effective against misinformation, because widely shared content rarely disappears entirely once it is taken down. I appreciate that Meta highlights sensitive content, makes sure users know when information has been rated as misleading, and reports how often content is removed for violating its Community Standards alongside the additional context it provides. However, by choosing crowdsourced Community Notes over expert review, Meta is making a trade-off in response to critics. Community Notes can usefully add context, but they are often slower than expert review at flagging potentially misleading content, especially during elections or public health crises, when false information spreads rapidly. The better approach would be for Meta to keep expert review for immediate, high-harm content and then use Community Notes to provide additional context after the initial wave of sharing (Meta, 2025; Transparency Center, n.d.).

TikTok, by contrast, takes a more hands-on approach oriented toward labeling and removal. Content that breaks the platform’s misinformation rules, including during electoral periods, may be removed, made ineligible for the “For You” feed, or have its recommendation limited, and the company maintains specialized moderation teams focused on civic issues. In a December 2024 post, TikTok explained how it protects users from harmful misinformation by restricting unverified content in the For You feed and working with 20 fact-checking organizations around the world. The company also said it applies clear labels to unverified content, connects users with reliable sources of information, and uses a combination of human moderators and technology to review content and enforce its community guidelines. According to TikTok’s most recent EU transparency report, 98% of the misinformation that violated its guidelines was removed proactively, and automated moderation tools accounted for 80% of that removed content. The company also pointed to its “election hubs” and “information centers,” which nudge users toward reliable sources of voting information and fact-checked content (TikTok, 2024).

By keeping unverified content out of recommendations, TikTok can stop problematic information from going viral before it starts. This is especially important given the nature of TikTok’s feed algorithm, which can propel content to a massive audience very quickly. So far, TikTok pairs robust moderation with contextual features that users can actually see on the platform. It performs best during periods of high public attention, such as elections and natural disasters, and I would like to see it maintain that standard year-round. The biggest weakness in TikTok’s approach is that although it labels content as removed or questionable, it does not always explain the reasoning behind those actions. Users may not fully understand what kinds of content get removed or restricted, or how to appeal those decisions (Pegum, 2024).

To better address concerns about its moderation, TikTok could publicly share more examples of gray-area content, such as posts that contain hate speech but also carry informational value about a topic. It would also help for the platform to explain its moderation practices in local languages, especially where user bases are large enough to warrant it. Finally, TikTok should more clearly label unverified content before users share it.

Both Meta and TikTok are moving away from a purely punitive approach to misinformation toward a model that introduces context and friction into users’ experience. This is a step in the right direction. The most effective approach will combine expert review; clear, transparent rules; fast takedowns for high-risk content; and sufficient explanation of decisions to users. I would advise Meta not to switch too quickly to a fully crowdsourced approach, and TikTok to extend its safeguards beyond election season. Misinformation is not only a moderation problem; it is also a design problem. Platforms that slow spread, provide context, and show their work will do the most to mitigate harm.