A story about common confusion
In December of 2025, Stanford researchers analyzed 2.2 billion social media posts looking for a pattern. They wanted to know what percentage of users posted severely toxic content. Not rudeness, not sarcasm, but speech so hateful that 90% of people worldwide would flag it as problematic.1
With this data in hand, they then asked thousands of people to answer a simple question:
Imagine walking into a bar with 100 people. Three of them are screaming about politics, about each other, about nothing. But the bouncer, who gets paid based on how long you stand there staring, has wired those three into the sound system and turned it up to ten.
You walk in, hear the roar, and conclude: this place is full of lunatics. Never hearing the 97 people having normal conversations a few feet away.
That's social media. The bouncer is an algorithm. And you have definitely been the bystander.
Pick a topic. Any topic. This is what your feed might look like:
Reading this feed, you might reasonably conclude that the country is split between unhinged extremes. It is not. And the gap between what Americans actually believe and what the feed suggests they believe may be the most consequential thing platforms are failing to show you.
Let's scale a hypothetical social media platform down to a single room with 100 people inside. This is what it looks like:
The room itself is largely quiet. Most people are simply sharing their own thoughts and posts. But an engagement-based ranking system (the bouncer wired into the sound system) amplifies the loudest voices until they dominate your feed. This is largely the result of an extremophile algorithm, one that thrives on extremes, trying to monetize the most provocative people in the room.
This pattern repeats across platforms. On Twitter/X, toxic tweets receive ~86% more retweets and ~27% more visibility than non-toxic ones, 0.3% of users share 80% of all contested news,14 and just 6% of users produce roughly 73% of all political tweets.16 On TikTok, 25% of users produce 98% of all public videos.15 The specific numbers vary. The dynamic is the same: a small minority of highly active users overwhelms the majority.
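To see how little it takes, here is a toy simulation. Every number in it is an assumption chosen for illustration (3% of users are "loud," they post 20x as often, and their posts draw roughly 10x the engagement); none of it comes from the studies cited above. It shows only what pure engagement ranking does to that mix:

```python
import random

random.seed(0)

# Toy model of the "3 loud people in a bar" dynamic.
# Assumed, illustrative numbers: 3 of 100 users are loud, they post
# 20x as often, and their posts draw ~10x the average engagement.
posts = []
for user in range(100):
    loud = user < 3                         # 3% of the room
    for _ in range(20 if loud else 1):      # loud users post far more
        mean_engagement = 10.0 if loud else 1.0
        posts.append((random.expovariate(1 / mean_engagement), loud))

feed = sorted(posts, reverse=True)          # rank purely by engagement
visible = feed[:25]                         # the posts you actually scroll

loud_share = sum(is_loud for _, is_loud in visible) / len(visible)
print("Loud users in the room:  3%")
print(f"Loud posts in your feed: {loud_share:.0%}")   # typically 90%+
```

The point is not the particular numbers. It's that ranking by engagement alone, with no ill intent anywhere in the pipeline, reliably turns a 3% minority into nearly everything above the fold.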
After some time consuming content in this room, your brain performs a kind of ambient demography. The feed becomes a sort of census. You conclude, logically, that the behavior must be widespread. The room might just be full of extreme people! Maybe most people do believe these crazy things.
If this were just about the tone of our posts, it wouldn't matter very much. But this distortion ends up driving some seriously harmful patterns of behavior.
Pattern 1: The Majority Goes Silent
When people in the majority look at the feed and assume they're outnumbered, they often self-censor.3 The dynamic replicates on social media:17 fear of social isolation suppresses opinion expression on platforms where an opinion is perceived to be unwelcome. People go quiet, or they leave the platform entirely. They cede the space to users with more extreme politics.
Pattern 2: The Loud Minority Thinks It's the Majority
The minority who aggressively post end up with their own distortion: believing they are part of the majority.5
A study of 17 extremist forums found the same pattern: the more someone posted, the more they believed the public agreed with them. Heavier participation bred false consensus.
Pattern 3: Everyone Gets Each Other Wrong
Both sides develop wildly inaccurate beliefs about who the other side actually is.6 Try it yourself:
The distortion extends to policy beliefs. Step through to see the perception gap on the issue of immigration.
Pattern 4: Politicians Chase the Distortion
Elected officials are very good at sensing political sentiment. It's literally their job. (They are not elected to correct people's beliefs.)
Politicians who can build a coalition around a perceived belief are more likely to win. They position themselves against an opponent who doesn't exist, but whom their supporters believe exists.
And remember: most of our politics now happens on social media. Candidates read the same distorted feed as everyone else, and they are no more likely to see through it.
The Overton window shifts. Not because opinion changed, but because perception did.
Pattern 5: Misperception Turns into Hostility
When you believe the other side is extreme, you become more willing to treat them as a threat.7
Both Democrats and Republicans vastly overestimate how many on the other side support political violence. The result is a populace primed to assume the other side is ready to do horrible things.
Each step feeds the next. The distortion is self-reinforcing.
Okay. So now you know that a small minority dominates the feed.
You know that Republicans and Democrats actually hold far more nuanced opinions on contested issues than the feed suggests.
Does that fix it? Not really. You also know that everyone else doesn't know it. And as long as the world keeps operating as if the distortion were real, you are probably rational to act the same way, even though you know it's wrong. The room hasn't changed, even if you know the people inside it are confused.
This is called a common knowledge problem.
Steven Pinker lays this out cleanly in his excellent recent book When Everyone Knows That Everyone Knows.8 Learning a fact changes what you know. Seeing it displayed publicly, where you know everyone else can see it too, changes what everyone knows, and with it how everyone acts.
Social media has no public square. It has 300 million private windows, each showing a different distortion of the same room. Making what we hold in common visible to everyone has the potential to radically change that.
So what can we do about this? Fortunately, there's some good evidence showing how it can be fixed.
Multiple studies show that when misperceptions are corrected in a public way, hostility drops. Mernyk et al. found that a single correction reduced partisan hostility for a full month.7 Lee et al. found that correcting overestimates of toxic users improved how people felt about their country and each other.1
We can do this today.
Imagine every post on a contested topic had a quiet link beneath it. Not a fact check, a label, or a warning. Just a question:
Let's explore an example that cuts across political identity:
83% of Americans support a constitutional amendment to limit money in politics. 81% are concerned about the influence of money on elections, including 78% of Republicans and 90% of Democrats. 75% say unlimited spending weakens democracy. Only 15% believe unlimited political spending is protected free speech.
And yet, very little changes, largely because everyone assumes the other side is fine with it. The feed is full of people defending their team's donors and attacking the other team's. It might look like a 50/50 partisan battle, but it's not. It's a majority consensus that cannot see itself.
What if you could see this consensus?
Community Check draws from a random sample of platform users and robust national polls, surveyed independently of the content. The sample is statistically representative. The results update continuously. And critically: everyone sees the same numbers.
Nothing is censored. Nothing is labeled. The loud posters can still keep posting.
But you, the viewer, will know where the community stands.
Fact-checking is a top-down approach that often feels like someone telling you what to think. This just shows you what people already think.
Content moderation has, for many years now, been perceived as removing speech. This simply adds context.
Nor is this just a user poll under a post. A poll beneath the content is already biased by algorithmic selection and by the people who happen to be viewing the content in real time. A community check is different: it draws from all platform users, coupled with statistically representative national surveys. It's an actual window into the views of the majority, not just the views of those looking at the post.
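As a rough sketch of the mechanical difference, here is what that aggregation might look like. Everything below is a hypothetical stand-in: the demographic cells, the population shares, and the tiny six-person sample; a real system would use large independent samples and proper survey weighting. The technique shown is ordinary post-stratification: weight each group's answer by its actual share of the population, so the estimate reflects everyone rather than whoever showed up.

```python
# Minimal sketch of a Community Check estimate, under assumed inputs.
# Responses come from an independent random sample of ALL platform
# users (hypothetical data), not from people who happened to see a post.
responses = [
    # (demographic_cell, supports_policy)
    ("18-29", True), ("18-29", True), ("30-49", True),
    ("30-49", False), ("50+", True), ("50+", True),
]

# Assumed population share of each cell (e.g. from census/national polls).
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

def community_check(responses, population_share):
    """Post-stratified estimate: weight each cell by its real share."""
    by_cell = {}
    for cell, answer in responses:
        by_cell.setdefault(cell, []).append(answer)
    return sum(
        population_share[cell] * (sum(answers) / len(answers))
        for cell, answers in by_cell.items()
    )

# Everyone sees the same number, regardless of which feed they're in.
print(f"Community support: {community_check(responses, population_share):.0%}")
```

A poll pinned under the post would instead average only the answers of the viewers the algorithm already selected, which is exactly the bias the rest of this piece describes.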
Short-form video is the fastest-growing vector for political distortion. The same dynamic applies — a small minority of creators produce the vast majority of political content — but video bypasses the pause that text gives you. Community Check can adapt. Tap through to see how.
Every platform already has the data. They already survey users. They already know the base rates. They already have the infrastructure to display context beneath posts. They just don't have the incentive.
But the unseen majority is the public. And the public deserves to know itself.
A tiny minority, dominating the feed. That's all it ever was. The rest of us were here the whole time, quiet and decent and waiting to be seen.
Honest objections deserve honest answers. These are the questions skeptics from every political perspective are most likely to ask.
How Community Check would work in practice, from data sources to platform integration.