Hey hey,

You have probably seen a post on Facebook flagged with a warning and just kept scrolling. Imagine having to make those calls billions of times a day.

Facebook currently relies on third-party fact-checking organizations to review posts and add misinformation labels.

The program costs millions every year yet reviews only a small fraction of the potentially false content on the platform.

Whenever a post gets flagged with a fact-check warning, user engagement on that post drops as people scroll past it.

On top of all that, Meta attends court hearings almost every quarter, where they face accusations that fact-checkers are biased and unfair.

Meanwhile, Twitter has a crowdsourced model that might work at scale, forcing Meta to decide whether to stick with professionals or take a risk on the crowd.

The Problem

Facebook’s moderation system is breaking down on three fronts.

  • First, it doesn’t scale. Fact-checkers cost hundreds of millions of dollars and still catch only a small fraction of false content.

  • Second, it’s politically toxic. Every time a post is flagged, one side of the political spectrum accuses Facebook of censorship, while the other side blames it for letting misinformation spread.

  • Third, it frustrates users. Content with warnings loses engagement, and many see the labels as proof of bias rather than a protective measure.

Put together, this means Meta is paying more for a system that angers politicians, alienates users, and fails to control misinformation at the speed it spreads.

At the same time, Twitter has introduced Community Notes, a crowdsourced system where users add context instead of relying on professionals.

And that is proving to be cheaper, faster, and less politically controversial.

Now, Facebook wants to adopt the same system.

Your Options

1. SHIP

Full switch to Community Notes

Facebook replaces professional fact-checkers with a crowdsourced system similar to Twitter’s Community Notes.

Users add context to posts, and algorithms decide which notes to display based on how helpful other users rate them.
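The core idea behind this ranking is often called “bridging”: a note is surfaced only when raters who usually disagree both find it helpful, so one-sided support isn’t enough. The production algorithm uses matrix factorization over the full rating matrix; the sketch below is a deliberately simplified heuristic with made-up note IDs and viewpoint groups, just to illustrate the principle.

```python
# Toy sketch of "bridging-based" note ranking: a note surfaces only when
# raters from different viewpoint groups both find it helpful. The real
# Community Notes algorithm uses matrix factorization; this heuristic
# simply scores each note by its *minimum* helpful-rate across groups.
from collections import defaultdict

def note_scores(ratings):
    """ratings: list of (note_id, rater_group, helpful: bool).
    Returns {note_id: score}; score is the lowest helpful-rate any
    group gave the note, so one-sided support scores poorly."""
    per_group = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for note, group, helpful in ratings:
        stats = per_group[note][group]   # [helpful_count, total_count]
        stats[0] += int(helpful)
        stats[1] += 1
    return {
        note: min(h / t for h, t in groups.values())
        for note, groups in per_group.items()
    }

ratings = [
    ("n1", "A", True), ("n1", "A", True), ("n1", "B", True), ("n1", "B", False),
    ("n2", "A", True), ("n2", "A", True), ("n2", "B", False), ("n2", "B", False),
]
scores = note_scores(ratings)
# n1 is rated helpful by both groups (min rate 0.5);
# n2 only by group A (min rate 0.0), so it would not be shown.
```

The design choice matters for the politics discussed below: because a note needs cross-viewpoint support to appear, the label reads as community consensus rather than a decision by Meta.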

This reduces costs and political pressure because moderation appears to be coming from “the community” instead of Meta.

The risk is scale without quality. If users abuse the system or form echo chambers, misinformation could spread unchecked.

2. WAITLIST

Hybrid model

Meta keeps professional fact-checkers for critical areas, such as health and elections, while using Community Notes for fast-moving viral posts and political content.

This balances accuracy with scale: fact-checkers bring credibility, while crowdsourcing keeps costs and speed under control. The trade-off is complexity.

Running two parallel systems is more expensive than crowdsourcing alone, and Meta could still face accusations of bias when fact-checkers step in.

3. SKIP

Double down on professionals

Meta expands its current fact-checking model by hiring more partners across various languages and investing in AI tools to speed up their work.

This strengthens accuracy and consistency but at a steep price: higher costs, slower response times, and constant political backlash.

Choosing this path means accepting that Facebook will always be in the spotlight.

Now, You Decide

JAPM’s Take

If we were sitting in the PM seat at Meta, we would ship Community Notes.

Here’s why:

  • No matter how many professionals you hire, you will never keep up with billions of posts across 100+ countries, but Community Notes grow with the platform.

  • Millions of dollars for very little coverage is a broken equation. Crowdsourcing isn’t free, but the economics are night-and-day better.

  • Every fact-check today looks like censorship. With Community Notes, it becomes “your peers think this needs context.” Same outcome, less platform blame.

  • Viral misinformation moves in hours, not days. Notes can scale in near real-time where professionals simply can’t.

But we wouldn’t roll it out recklessly.

As a PM, we would treat this like a phased product bet:

  • Start with pilots in a few high-engagement but lower-risk markets.

  • Build strong safeguards against brigading and gaming.

  • Keep professionals in the loop for high-stakes domains such as health and elections until Notes prove themselves.

The trade-off is clear: some false content will slip through early.

But the upside is existential. Meta will step away from the censorship crossfire and into a system that scales.

In product terms, this is one of those moments where “perfect but unscalable” has to give way to “imperfect but resilient.”

At Meta’s scale, fact-checking is a survival problem. The real choice isn’t between good and bad options, it’s between what can scale and what can’t.

Community Notes may be imperfect, but it shifts blame, lowers costs, and moves faster than any professional team.

The question is: do you trust billions of users more than a few thousand experts?
