Lesson 12: Why You Should Never Roll Out a New Process Without Doing This First
There’s one surefire way to kill an exciting idea at work: dropping it on everyone before you’ve actually tried it out yourself. Nothing drains the energy from a process faster. The solution? Run a pilot first – it saves you the headache of endless debates and lets the results speak for themselves.
Honestly, most leaders have been there, even if they don’t realize it. You spot a smart shortcut or a better method. On paper, it looks like a winner, so you rush to roll it out everywhere. But you barely finish the announcement before the resistance starts. Suddenly, every exception crops up. People insist it won’t work for them. Those who weren’t part of designing it dig in their heels. That shiny process improvement turns into an uphill negotiation. In the end, you either force the change (so people just pretend to use it) or scrap it because you can’t get enough buy-in.
All this drama? Easy to dodge. Just test your idea on a small scale first.
The Principle: Pilot Small, Measure Fast, Decide with Data
In the last post, I covered the three-item boundary checklist – a simple tool that clears up handoffs so nobody’s left guessing. Maybe you’ve got your checklist ready. Here’s the trap to avoid: don’t roll it out everywhere at once.
It’s not that your checklist isn’t solid. You just haven’t proved it’ll work for your specific crew. And if you don’t have proof, the doubters aren’t wrong – nothing backs you up.
Running a real pilot changes the whole dynamic. Test your checklist on one process. Watch what happens. Gather numbers and feedback. Now, when it comes time to pitch a wider rollout, you aren’t stuck with “trust me.” You’re armed with actual results. Conversations shift – getting buy-in feels way easier.
This isn’t just about tactics. It’s about the bigger idea: You aren’t trying to be right. You’re trying to see if your fix actually works – and to find out fast, while it still matters.
A Real Example: How a Two-Sprint Pilot Settled a Political Tug-of-War
Here’s how this looks in practice.
There was this team with constant delays. Work kept bouncing between groups, but nobody spelled out who was handing what, or when. The endless questions, rewrites, and slow acceptances drained days from already tight deadlines. The boundary checklist was supposed to solve it.
Cue instant skepticism. Another checklist? More steps? Under pressure, everyone’s wary of adding extra work. Some complaints were legit, others were just people resisting change for the sake of it. Hard to tell from the outside.
Instead of arguing, we ran a pilot.
We used the checklist for three features over two sprints. One process owner, one workflow, controlled scope. Big enough to matter, small enough to keep it manageable. Before starting, we set clear goals: cut down those clarifying questions, speed up acceptance across teams. Measured everything against the baseline.
Here’s what happened:
- Clarifying questions dropped 60%.
- Acceptance time improved 20%.
- The “rollback” in the checklist got triggered once – which actually stopped a problem before it reached customers.
The numbers did what words couldn’t. When it was time to talk about scaling, it wasn’t about theories – just hard results. “Here’s what happened, here’s what we expect next – is this worth rolling out?” The skeptics didn’t disappear, but their arguments softened. The pilot had answered the toughest questions.
How to Run a Simple, Effective Pilot
Step one: Pick a high-impact workflow.
Skip the easiest or hardest – go for the area where confusion and slow handoffs cause the most pain. That’s usually where clarifying the process makes the biggest impact.
Step two: Set clear success criteria before you start.
Don’t skip this. If you start fuzzy, you’ll end fuzzy – skeptics will shrug off the results.
Pick two or three outcomes you can actually measure, without making extra work:
- How long does it take for the receiving team to pick up the work after handoff?
- How many rounds of clarifying questions before the team can move forward?
- How often do you trigger the checklist’s “rollback” and does it actually snap things back on track?
Get baseline numbers first – without them, you’ve got nothing to compare against.
Step three: Run two full cycles.
Two sprints, same owner. It’s fast enough to keep everyone invested, but long enough to get past the learning curve. The first sprint will uncover the bumps; the second shows if the fix actually sticks.
Step four: Log the results.
You don’t need every detail. Just enough to be believable – track handoff times, questions, and rollbacks in a shared doc.
Step five: Let the numbers lead.
When the pilot’s over, share the data before you suggest anything. If the pilot worked, the numbers will make your case. If it didn’t, the results point you toward better tweaks. Either way, you’re showing evidence, not opinions.
The Honest Lesson
I’ll admit it: I once ran a pilot with no success criteria. We had the right idea – pilot first, scale later – but I let the details slip. When it ended, the people involved felt positive, but we couldn’t prove anything to anyone else. The naysayers didn’t need solid counterpoints – the lack of data was enough. The pilot got buried, and the improvement stalled for a quarter. Later, someone dusted off the same solution and pushed it through – with more political muscle behind it.
Clear criteria don’t just help you measure – they guard against office politics. They shape the final conversation: are we debating, or showing the answer?
Fuzzy goals lead to fuzzy results, and fuzzy results turn into endless arguments. Arguments rarely solve the actual problem. So, lock in what success means before you start. Grab your baseline. Run two sprints. Let the results make your case. That’s how you turn a pilot into a real win, not just another frustrating battle.
