What Leadership Looks Like When the Data Can’t Deliver
How to stop going down the rabbit hole

Years ago, a senior leader asked my data team to prove that our dataset, combined with a machine learning model we’d been experimenting with, could support a more granular customer product than anything the company had offered before.
It was a new, exciting methodology. The buzz spread quickly. People unfamiliar with the technology began making bold promises and sketching out roadmaps. The enthusiasm was contagious, and the pressure to deliver something impressive mounted fast.
If you’ve worked in data, product, or analytics, you’ve probably seen this dynamic before. Organizational excitement can take on a life of its own, creating momentum long before the viability of an approach has been tested.
When Data Hits Its Limits
What happened next made this task difficult. But not because the model failed. It didn’t.
We delivered. We developed the model, ran it, iterated, and hit the deadline.
What made it difficult was realizing that the outputs were asking us to pretend.
The model relied upon the assumption that at more granular levels, the company’s data would hold up - there would be sufficient, reliable information to support more detailed analysis.
But the moment we pushed in that direction, the foundation cracked.
At finer levels of detail, the data was neither robust nor reliable enough to support the claims the product would need to make. Incorporating those results wouldn’t just risk being wrong. It would create the appearance of insight without the substance to back it up.
Why? You can run a model perfectly and still end up with nonsense, because the model can’t invent reality that isn’t in the data. Past a certain point, you’re no longer analyzing. You’re interpreting smoke.
My team and I demonstrated exactly where the data stopped reflecting reality. And just as importantly, we showed that a fix wasn’t a matter of “cleaning the data” or “tuning the model.” The dataset, as it existed, could not responsibly support the outcome the organization was hoping for.
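To make the idea concrete, here is a minimal sketch of the kind of check involved - not our actual pipeline, and the dataset, column names, and thresholds are invented for illustration. The point it demonstrates: slice the same data into finer segments and the per-segment sample sizes collapse, so the uncertainty around each estimate starts to swamp the signal.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical customer dataset: one coarse dimension (region) and a
# finer-grained combination (region x product x channel), same metric.
n = 5_000
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], size=n),
    "product": rng.choice([f"prod_{i}" for i in range(50)], size=n),
    "channel": rng.choice(["web", "store", "partner"], size=n),
    "spend": rng.gamma(shape=2.0, scale=50.0, size=n),
})

def segment_reliability(frame: pd.DataFrame, keys: list[str]) -> pd.DataFrame:
    """Per-segment sample size and relative standard error of mean spend."""
    g = frame.groupby(keys)["spend"]
    out = g.agg(n="count", mean="mean", sem="sem")
    out["rel_error"] = out["sem"] / out["mean"]  # noise relative to signal
    return out

coarse = segment_reliability(df, ["region"])
fine = segment_reliability(df, ["region", "product", "channel"])

print("Coarse segments:", len(coarse),
      "| median n =", int(coarse["n"].median()),
      "| median relative error =", round(coarse["rel_error"].median(), 3))
print("Fine segments:  ", len(fine),
      "| median n =", int(fine["n"].median()),
      "| median relative error =", round(fine["rel_error"].median(), 3))

# Flag segments too thin to support a customer-facing claim
# (the thresholds here are placeholders, not a standard).
too_thin = (fine["n"] < 30) | (fine["rel_error"] > 0.2)
print(f"Fine segments below the reliability bar: {too_thin.mean():.0%}")
```

The exact thresholds are a judgment call; what matters is that the check measures the data’s capacity to answer the finer-grained question, not the sophistication of the model sitting on top of it.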
When Data Work Stops Being Analytical
At that point, the real question wasn’t whether we could keep trying - running more iterations, plugging holes. It was whether it made sense to. We had reached the point where the data work stopped being analytical and started becoming performative.
If we’d kept going, the model outcome wouldn’t have been neutral. We would have created something polished and wrong - an output that no longer reflected reality.
That’s a dangerous place to be. Not because the math is hard or the model is complex - but because the work starts to serve the momentum rather than the reality.
Knowing When to Stop
Whether you are leading a data-related project or you are on a data team, there’s a strong pull in data and analytics work to keep going. If the results don’t look right, try a different model. Explain away gaps. Refine what data “counts” until the results look cleaner.
All of that effort feels productive and looks rigorous. And when advanced methods or AI are involved, it signals sophistication. But there’s a point where continuing doesn’t get you closer to the truth. It just gets you closer to a story you want to tell.
Leadership shows up in recognizing the facts of your current situation and making the call to stop.
Leadership is knowing when to say: this is as far as this can go.
Slow the Room Down
Calling a halt is easier said than done.
There is pressure at every level to keep an exciting project moving forward - whether it’s personal visibility, career momentum, team justification, market competition, or the expectations of leadership above you.
Resisting that pressure when the situation calls for a pause requires both data expertise and leadership skill. It’s also uncomfortable, politically delicate, and often risky.
When teams - leaders, product managers, data scientists - get excited about a new method, anchoring the conversation in reality creates space for better decisions.
A few well-chosen questions can slow the momentum and bring clarity back into the room:
What is the objective?
If the real goal is “prove this hypothesis could be ‘right,’” you’re already at risk of chasing a story instead of insight.

What would have to be true in the data for this experiment to work?
In our case, it would have required the data classification process to be consistently accurate at a much finer level of detail - something it was never designed to support.

Where is this data doing its job, and where does it stop?
The data was fit for its original purpose and trusted for that reason. The problem wasn’t quality; it was overreach.

What’s the risk if we’re wrong, and who pays for it?
If a polished output persuades customers or executives to act, the consequences become reputational, financial, or operational.
And a final gut check:
If we didn’t already want this to work, would we still pursue it?
If the answer is no, that’s usually your signal to stop.
What Comes After Stopping
Knowing when to stop isn’t just an internal judgment call. It’s also a communication challenge.
In this case, stopping meant clearly showing where the data stopped reflecting reality and translating the technical limits into terms that stakeholders could understand and trust. No jargon or drama. Just a focus on facts, boundaries, and consequences.
Just stopping isn’t enough.
Your stakeholders will naturally ask “Now what?” Leadership means being prepared for that question. If this experiment can’t responsibly deliver results, then what can? Take advantage of this opportunity to propose alternatives.
Anyone can push forward.
Leadership is knowing when to say: this is as far as this can go - and then helping the organization move somewhere better.
Share your examples! Have you had a data project turn into an exercise in leadership? Tell us about it!


