“Those Numbers Can’t Be Right.”
When the data challenges expectations, dive in. This is where good stuff happens.
You are in a meeting. A dashboard is shared, and a report appears on screen. A number jumps out - maybe it’s higher than expected, or maybe lower. Way lower.
Someone says what everyone else is already thinking: “Those numbers can’t be right.”
And sometimes they aren’t. But just as often, the number itself isn’t the problem. What’s actually being challenged is our expectation of what the number should be.
These moments are signals to slow down and take a closer look.
Despite the internal groan, great data conversations happen when the numbers don’t match the story in our heads.
In this post, we’ll walk through an example of how one product team handles this situation, along with the process great teams use to investigate data that doesn’t match expectations.
When the numbers don’t make sense.
Imagine a product team that has recently introduced a new feature inside their main product that allows users to generate a report, a task previously requiring cumbersome manual work.
Customers have been clamoring for it. The internal team is excited about it. They’ve highlighted the new reporting feature in release notes, mentioned it in onboarding messages, and demoed it internally. Everyone assumes clients will jump in right away.
Two weeks later, after a robust launch campaign, the internal team pulls up a usage dashboard. The usage number is… surprisingly low. Only a tiny percentage of users appear to have engaged with the new reporting tool at all.
Around the table, people start doing the quick mental math. Something about the number doesn’t feel right. What’s going on?
There are a few possibilities. And in practice, it’s often a mix of them.
There is a problem with the data.
Sometimes the issue really is the data.
Our product team’s usage metric depends on analytics signals that fire when someone uses the reporting tool. After digging into the setup, the team discovers those signals were only triggered when users opened the tool from one specific place in the product.
But many people were reaching it through other paths like shortcuts, saved links, or existing workflows. Those sessions simply weren’t being counted.
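To make the undercount concrete, here’s a minimal Python sketch using a hypothetical event log; the “entry_path” field and all of the values are illustrative, not the team’s real schema:

```python
# Hypothetical usage events; "entry_path" records where the user
# launched the reporting tool. All names here are illustrative.
events = [
    {"user": "ana", "entry_path": "nav_menu"},
    {"user": "ben", "entry_path": "saved_link"},
    {"user": "cam", "entry_path": "keyboard_shortcut"},
    {"user": "dee", "entry_path": "nav_menu"},
    {"user": "eli", "entry_path": "saved_link"},
]

# The analytics signal only fired for the nav-menu entry point,
# so every other path was invisible to the dashboard.
instrumented = [e for e in events if e["entry_path"] == "nav_menu"]

print(f"counted {len(instrumented)} of {len(events)} actual uses")
# -> counted 2 of 5 actual uses
```

The dashboard wasn’t lying about what it measured; it was measuring a narrower slice of reality than anyone realized.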
There are many things that can cause problems with your data, even in an established report. Data pipelines break. Dashboards miscalculate. Definitions drift over time.
The metric isn’t measuring what you think.
In another scenario, the data itself is fine, but the metric isn’t measuring what the team assumed.
The team thought the metric would count people who opened and used the tool. Instead, it counted only people who successfully finished creating a report.
All those users who explored for a moment, or found the data to answer a quick question and then left? Ignored. The dashboard reflected only completed reports, not overall usage.
Once the definition was clarified, the problem to investigate was no longer “Why isn’t anyone using the tool?” but “Why are people starting the process but not finishing it?”
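The gap between the two definitions is easy to see once both counts sit side by side. A minimal sketch, assuming a hypothetical event log with illustrative “report_opened” and “report_completed” event names:

```python
# Hypothetical event log for the reporting tool. Event names are
# illustrative, not the team's real schema.
events = [
    {"user": "ana", "event": "report_opened"},
    {"user": "ana", "event": "report_completed"},
    {"user": "ben", "event": "report_opened"},
    {"user": "cam", "event": "report_opened"},
    {"user": "dee", "event": "report_opened"},
]

opened = {e["user"] for e in events if e["event"] == "report_opened"}
completed = {e["user"] for e in events if e["event"] == "report_completed"}

# "Usage" looks very different depending on which definition you pick.
print(f"opened the tool:    {len(opened)}")     # 4
print(f"completed a report: {len(completed)}")  # 1
print(f"drop-off rate:      {1 - len(completed) / len(opened):.0%}")  # 75%
```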
The underlying assumption is wrong.
Sometimes the number is accurate. Disappointing? Sure. But wrong? No.
When reviewing the data more closely, our product team notices something interesting: the reporting tool is being used by only a handful of users, but those users keep coming back. Digging in, the team identifies them as power users who regularly analyze large datasets. Most customers only need that kind of reporting occasionally.
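A quick concentration check makes this kind of pattern visible. Here’s a minimal Python sketch over a hypothetical session log (user names and counts are made up for illustration):

```python
from collections import Counter

# Hypothetical sessions: which user launched the reporting tool, per visit.
sessions = ["ana", "ana", "ana", "ana", "ben", "ana", "ben", "ana"]

uses_per_user = Counter(sessions)
total = sum(uses_per_user.values())
top_user, top_uses = uses_per_user.most_common(1)[0]

print(f"distinct users: {len(uses_per_user)}")  # 2
print(f"total uses:     {total}")               # 8
print(f"heaviest user:  {top_user} ({top_uses / total:.0%} of all uses)")
```

A headline usage number can hide exactly this shape: low breadth, high depth.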
The new feature isn’t failing. It’s just for a more specialized audience than the team originally expected.
That realization changes how the team thinks about improving the tool and how they introduce it to customers going forward.
This is often the most difficult situation to accept. Here, the feature wasn’t as broadly applicable as the team had assumed, and the lesson is to pressure-test customer feedback more rigorously next time.
These moments show up everywhere
Mismatches between data and expectations appear across all types of organizations:
A customer service team runs their quarterly client satisfaction survey and sees scores plummet unexpectedly.
A small business owner notices a shift in sales patterns that doesn’t match their day-to-day experience.
A library checks its children’s program dashboard and finds attendance numbers that feel surprisingly low.
In each case, the number doesn’t quite line up with what people thought was happening.
And the best teams investigate…doggedly
Sometimes the explanation is technical, like a data pipeline problem or a change in how the metric is calculated. Other times the number is pointing to something real that simply hasn’t been noticed yet.
Strong teams resist the urge to immediately defend or dismiss these situations. Instead, they treat the moment as a serious investigation. When pursuing an explanation, they typically follow this path:
1. Check the source
Is the data pulling from the right system? Are records missing, duplicated, or only partially captured? Has the calculation changed? Has the report been refreshed?
This requires investigative effort from you and your technical partners. If you aren’t familiar with how the report is created, now is a good time to start learning about the data pipeline.
The good news? In this situation, once the issue is corrected, the numbers change significantly and tend to return to “normal.”
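If you can get at the raw records, a few quick sanity checks cover the most common failure modes: duplicates, partial captures, and missing days. A minimal Python sketch, assuming a hypothetical record layout:

```python
from datetime import date, timedelta

# Hypothetical raw records behind the dashboard; field names are illustrative.
records = [
    {"id": 1, "day": date(2024, 5, 1), "user": "ana"},
    {"id": 2, "day": date(2024, 5, 1), "user": "ben"},
    {"id": 2, "day": date(2024, 5, 1), "user": "ben"},  # duplicate row
    {"id": 3, "day": date(2024, 5, 2), "user": None},   # partially captured
    {"id": 4, "day": date(2024, 5, 4), "user": "cam"},  # May 3 never arrived
]

# Duplicated records inflate counts.
ids = [r["id"] for r in records]
print("duplicates:", len(ids) - len(set(ids)))  # 1

# Partially captured records break per-user metrics.
print("incomplete:", sum(1 for r in records if not r["user"]))  # 1

# Missing days suggest the pipeline silently dropped a batch.
days = {r["day"] for r in records}
start, end = min(days), max(days)
gaps = [start + timedelta(d) for d in range((end - start).days + 1)
        if start + timedelta(d) not in days]
print("missing days:", [d.isoformat() for d in gaps])  # ['2024-05-03']
```

None of these checks proves the dashboard wrong on its own, but any one of them failing tells you exactly where to dig next.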
2. Confirm the definition
What exactly does this metric measure? Is it counting the thing people assume it is?
Data’s intangible nature means we often are not speaking the same language. In this case, a picture is indeed worth a thousand words. If you suspect your metric is not measuring what you think it is, meet with the report builder and sketch or show the behavior you believe you are measuring.
Even if you are both using the same words, don’t assume that their definition of ‘customer’, ‘sales’, ‘conversion’, or ‘traffic’ is your definition.
3. Revisit assumptions
If the number holds up so far, then it’s time to re-assess your assumptions. What else might explain the results? Are we expecting something that isn’t actually happening?
This is when you find your assumptions need adjusting. Sometimes, you even discover that a fundamental belief or ‘truth’ about your business or industry simply is not accurate.
These perspective-altering pieces of information are incredibly valuable, but they can be difficult for you, or your organization, to accept. If this is your situation, triple-check the data and look for explanations that might help you interpret what you are seeing.
Why this matters
Handled this way, the moment of skepticism becomes productive. It helps the team surface the real issue and focus on the problem that actually needs solving. Teams clarify what their metrics actually mean, surface hidden data quality issues, and notice real patterns that might otherwise go unnoticed.
This effort strengthens the team’s relationship with their data, and with that familiarity comes more confidence in the numbers they rely on. Just as important, the culture around data shifts. Instead of defending assumptions or dismissing surprising results, teams get more comfortable investigating what’s really happening.
So when someone looks at a report and says “that number can’t be right,” they might be correct. But even when they aren’t, the reaction is still valuable.
It’s an invitation to pause and ask a few better questions: Where did this number come from? What does it actually measure? And do our assumptions still hold?
Have you ever had a moment where the numbers didn’t match what you expected?