In Part 1, we covered the basics for critically evaluating data for decision making:
What is the data?
Where did it come from?
Answering those gives you a stronger grasp of what you’re actually looking at and whether it’s reliable for your decision.
But good data still has limits. It’s shaped by assumptions, context, and human judgment. Part 2 is about digging into those layers so you can:
Spot baked-in assumptions
Assess whether the data is reliable enough for your decision
Evaluate risk, ambiguity, and potential bias
Why does this matter?
You are accountable for understanding the data you act on.
And we want you to feel confident that the data you use to make key decisions is what you think it is.
So once you understand what the data is and its source, here are the next three questions.
Habit #3: What are the Assumptions?
Every number you see has a backstory and is built on choices. Some choices are clear (like timeframes), but others are invisible unless you ask. Here are some common assumptions that frequently lead to errors or bad decisions.
DON’T assume that:
Calculations are accurate. A surprising number of errors happen because someone’s formula is incorrect. We are not saying to re-check every calculation, but a spot check is a good practice.
Projections or estimates are using the assumptions you would use. This is rarely the case, so always feel confident probing the assumptions. For example, if you are given a revenue forecast, dig into core assumptions like: What is the assumed average sales price? What is the assumed increase or decrease vs. this year? How many sales are expected from new clients vs. existing clients? What percent of revenue is attributed to new products? This discussion is a valuable step in refining the estimate.
The data is representative. Even the savviest data users can fall into the trap of assuming a data point reflects the full picture - whether that’s a country’s population or your total client base. But if a survey on fried chicken sandwiches is conducted with 1,000 college students, it doesn’t represent national preferences - it represents college student preferences. So always ask: Who’s actually included in this data? And who’s missing?
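One way to make the forecast-probing habit concrete is to lay a projection's assumptions out explicitly, so each one can be questioned on its own. The sketch below uses purely hypothetical numbers (price, unit counts, and product mix are invented for illustration, not taken from any real forecast):

```python
# Illustrative only: a toy revenue forecast decomposed into the kinds of
# explicit assumptions worth probing. All numbers are hypothetical.
avg_sale_price = 120.0      # assumed average sales price per unit
units_existing = 800        # assumed unit sales to existing clients
units_new = 200             # assumed unit sales to new clients
new_product_share = 0.15    # assumed share of revenue from new products

total_units = units_existing + units_new
revenue = avg_sale_price * total_units

print(f"Forecast revenue: ${revenue:,.0f}")
print(f"Share of units from new clients: {units_new / total_units:.0%}")
print(f"Revenue attributed to new products: ${revenue * new_product_share:,.0f}")
```

Writing the forecast this way turns each input into a question you can ask: Is $120 the right average price? Are 200 new-client sales realistic? Change any assumption and the headline number moves with it.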
Habit #4: Is the Data Reliable Enough?
You won’t always have perfect information. So the real question becomes: Is this good enough for this decision? In our experience, it depends.
Consider the stakes. Low-stakes decisions, like an A/B test tweak, may not require bulletproof precision. High-impact decisions, like a large investment or launching in a new country, warrant more scrutiny.
Watch for false precision. A forecast of "4.2%" sounds precise, but is it that accurate? Sometimes we hear a 'fact' in the news like "4.2% of the population will go on vacation this summer vs. 4.0% last summer." But if those figures come from two consumer surveys of 100 people each, that extra decimal implies a level of certainty that does not exist. A more honest depiction would be that approximately 3–5% of people plan to go on vacation, about the same as last year, perhaps slightly higher. And that estimate may be perfectly fine depending on the decision you are making. Being directionally right might be enough, but you need to know that upfront.
If a piece of data sounds surprising, dig deeper. If a usage metric is suddenly way up or way down, something may be broken. Even if it's not an error, it's still important to find the cause. For example, if subscription cancellations spike, is it a competitor push or a product issue? If sales jump more than normal, was it a real gain or something else? Surprises are your cue to investigate. Sometimes they signal an opportunity. Other times, they signal a problem in the data.
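The false-precision point can be made concrete with a standard margin-of-error calculation. For a survey proportion, the approximate 95% margin of error is 1.96 × sqrt(p(1−p)/n). Using the hypothetical vacation survey from above (100 respondents, 4.2% saying yes):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a survey proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey from the text: n = 100, 4.2% plan a vacation
moe = margin_of_error(0.042, 100)
print(f"4.2% +/- {moe:.1%}")  # roughly +/- 3.9 percentage points
```

With a margin of error near 3.9 points, the "true" figure could plausibly be anywhere from under 1% to about 8%, so a 4.2% vs. 4.0% year-over-year comparison is indistinguishable from noise. That second decimal place is decoration, not information.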
Habit #5: Spot the Bias
Reports and data are built by people. And everyone has a lens they apply to the data - and sometimes an agenda. That doesn’t make the data wrong. But it does mean you should ask a few more questions.
True for this ≠ true for that. People often believe data about one group (a demographic, geography, or channel) applies more broadly. Not always.
If X% of Instagram users are buying from influencers, that doesn't mean all social media users do.
Many U.S. health studies were based largely on white men, yet their findings are applied to all genders and ethnicities.
NYC restaurant trends don’t predict what’s hot in Texas.
The top-selling toys in the U.S. won’t match those in France.
Past trends ≠ future trends: Don't assume that past behavior or trends will reflect future ones. While this is often true, don't take "It's always been this way" as fact. Treat it as an assumption and pressure-test it like any other. Disruptive examples: a competitor makes a breakthrough; an alternative technology gains popularity and replaces an incumbent; a market trend picks up momentum and hits a tipping point; a product goes viral online.
Motivation matters. Everyone has goals, and incentives shape how data is framed. A drug company wants to show positive results to the FDA. A salesperson might tweak forecasts based on how they’re compensated. From students to policymakers, people interpret data through their lens. Motivation doesn’t make data invalid—but it does shape how it’s presented. Know the agenda, and factor it in.
Wrap-Up: Build Your Savvy
Being data-savvy isn’t about being a data analyst. It’s about being curious, clear, and confident.
You don’t need to catch every error, verify every figure, or rebuild the spreadsheet. That’s not the goal.
But if you:
Know what you’re looking at
Understand where it came from
Ask about assumptions
Assess the stakes and motivations
…you’ll be in a much stronger position to use information well, steer conversations effectively, and make decisions that hold up under pressure.
You’ve got this.
Was this helpful?
Click the ❤️ below. This helps other readers find this content and lets us know what resonates.