False Consensus Effect

The False Consensus Effect is a cognitive bias in which we overestimate how widely others share our beliefs, opinions, preferences, values, and habits. On teams, it can lead us to assume everyone is on the same page without verifying it, potentially causing misalignment and project delays.

The term “False Consensus Effect” was first introduced in the late 1970s by researchers Lee Ross, David Greene, and Pamela House in a study exploring the extent to which individuals believe their own behavior is typical. The study asked students at Stanford University to estimate how many of their peers would be willing to walk around campus wearing an embarrassing sandwich-board sign. Those who agreed to wear it estimated that 63% of their peers would do the same, while those who refused believed that most of their peers would also refuse.

This bias reflects our need to see our own choices as reasonable and socially acceptable. Believing that our attitudes and actions are widely shared helps us feel “normal.”


When it comes to UX design, believing everyone thinks like you can lead to mistakes. Designers might wrongly assume that what they prefer is what everyone prefers, and the product can end up missing what its actual audience needs or wants.

Planning teams might fall into this trap too, especially when they make guesses about what customers want. Teams may misjudge actual demand, overlook potentially lucrative opportunities, or fail to address potential risks.

For engineers, this bias shows up in assumptions about coding practices, architecture decisions, and tech stack choices. They may rationalize an approach based on personal preference or familiarity, without considering alternatives that might work better for more people, or the larger ecosystem and industry best practices that could inform better solutions.

🎯 Here are some key takeaways:

Challenge your assumptions

Don’t assume everyone on your team shares the same perspectives as you. Regularly check in with your team and be willing to update your understanding based on new information, evidence, or different perspectives.

Question your intuition

The more experience you have, the more you’ll learn to trust your intuition. This isn’t a bad thing, but always make sure you’re checking your assumptions against new data as it comes to light.

Validate assumptions through rigorous processes

Implement structured methods for gathering feedback, conducting user research, and analyzing data to validate assumptions rather than relying solely on individual intuition (a small example of checking an assumption against data follows these takeaways).

Recognize potential blind spots

Acknowledge that your personal perspectives may be shaped by individual biases or limited contexts, and be open to alternative viewpoints that can broaden your understanding.

Don’t focus on extremes

Extremes may be memorable, but they are likely rare. Accounting for edge cases is important, but they are probably not the norm, so spend the time you devote to them wisely.
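To make the “validate assumptions” takeaway concrete, here is a minimal Python sketch of checking an assumed consensus against actual survey data. Everything in it is hypothetical: the 80% assumption, the 25 survey responses, and the consensus_check helper are invented for illustration. The point is simply that a belief like “most users prefer what we prefer” can be compared against an observed proportion and its confidence interval instead of being taken on faith.

```python
# Hypothetical sketch: compare an assumed level of agreement ("everyone
# prefers X") against actual survey responses, instead of trusting the
# assumption alone. All data below is made up.

from statistics import NormalDist

def consensus_check(assumed_share: float, responses: list[bool],
                    confidence: float = 0.95) -> str:
    """Report whether an assumed share of agreement falls inside a simple
    normal-approximation confidence interval for the observed share."""
    n = len(responses)
    observed = sum(responses) / n
    z = NormalDist().inv_cdf(0.5 + confidence / 2)       # ~1.96 for 95%
    margin = z * (observed * (1 - observed) / n) ** 0.5
    low, high = observed - margin, observed + margin
    verdict = "inside" if low <= assumed_share <= high else "outside"
    return (f"Assumed {assumed_share:.0%}, observed {observed:.0%} "
            f"({confidence:.0%} CI {low:.0%}-{high:.0%}): "
            f"assumption is {verdict} the interval.")

# Example: the team assumes 80% of users prefer their favourite option,
# but only 11 of 25 surveyed users actually do.
survey = [True] * 11 + [False] * 14
print(consensus_check(0.80, survey))
# -> Assumed 80%, observed 44% (95% CI 25%-63%): assumption is outside the interval.
```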
