Late Projects Caused By Poor Estimation and Other Red Herrings
I've been seeing a pattern lately with Agile projects.
It's not a new pattern. It's one we've all likely seen on more traditional development projects for years.
The story goes a little like this:
A customer needs a software solution to a problem. An Agile team swoops in and in a reasonably short amount of time, and in collaboration with customers and end users, writes a bunch of user stories. Developers then estimate the stories. Together we spread these stories out over time and build a release plan. The customer makes some tough tradeoffs and winnows the release plan down to a set of stories that we all believe can be completed by the release date. The customer cautions "We really need to hit this date… we've got a lot of people depending on this software." The story estimates have been "marked up" - a load factor applied based on historic team velocity. We're comfortable we can deliver these stories by the release date.
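The "marked up" estimates mentioned above could be computed in many ways; one common sketch, assuming story estimates in ideal days and a load factor derived from historic team velocity (all numbers and function names here are illustrative, not from the original story):

```python
# A minimal sketch of velocity-based release planning. The load factor
# is the historic ratio of elapsed calendar time to estimated ideal time;
# applying it "marks up" raw story estimates into a more realistic plan.

def load_factor(ideal_days_completed, elapsed_days):
    """Historic ratio of calendar time to completed ideal-day estimates."""
    return elapsed_days / ideal_days_completed

def marked_up_estimates(story_estimates, factor):
    """Apply the historic load factor to each raw story estimate."""
    return [e * factor for e in story_estimates]

# Example: the team historically finished 20 ideal days of estimated
# work in 30 calendar days, so the load factor is 1.5.
factor = load_factor(ideal_days_completed=20, elapsed_days=30)

stories = [3, 5, 2, 8]                       # raw estimates in ideal days
plan = marked_up_estimates(stories, factor)  # [4.5, 7.5, 3.0, 12.0]
print(sum(plan))                             # 27.0 marked-up days total
```

The point of the markup is exactly the comfort described above: the team believes the marked-up total, not the raw total, fits inside the release date.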
Those of you with a few years' experience in software development can guess what happens next.
As we start to develop, we find that stories are taking longer than was originally estimated.
Sometimes a little longer, sometimes a lot longer.
As time goes by, we realize we're going to be really late with this delivery. Fingers start pointing:
"You developers estimated this horribly. How could you be so far off!"
Developers respond: "We don't know what happened… as we get more details about stories, everything is much more complicated than we originally expected. Is scope expanding? Many of these estimates came from developers no longer on the team. It must be them; they're the ones who really screwed up!"
Along the way someone decides that we should have done more initial analysis and design - more details about exactly what we should be building. Then we'd have known enough to more accurately estimate.
Our company needs to be able to deliver software more predictably, so our organization launches into lots of discussion and research on how we can improve the quality of our estimation.
I think it's all nonsense.
The estimates were fine. We're focusing on the wrong thing.
Estimates are based on a shared mental picture
When you think about an estimate for software, especially an estimate based on an Agile user story, the notion is a little crazy.
A user story describes a need the user has. During conversations with someone in the customer role, the developer forms a mental picture of how the story might be implemented. He'll compare that mental picture to other similar things he's implemented before. The customer and developer may sketch a little user interface to decide how the UI might look and behave. Developers might talk with each other about how they might implement the more difficult bits. An estimate is finally given.
All those discussions helped the developer build the mental picture he needed to estimate. Those discussions were rich with assumptions about what the UI might look like, what might be on it, and how the internals of the code might be implemented. These assumptions were made in the context of release planning where we all as a team - both customers and developers – are concerned about getting lots of functionality built in a small amount of time. Our mental pictures are likely simple - as simple as they will ever be again.
Soon after the estimate is given, those assumptions begin to fade. Within a short amount of time, the mental picture we held is like a dream we had weeks ago.
Initial estimates are based on a mental picture we build of the software. That picture is based on our assumptions of both what the solution is that best solves our customers' problem and how we might implement the software.
As we draw closer to implementation time, we rebuild our mental pictures
As we start to build and implement these user stories we have many of these conversations again. The conversations go a bit differently this time.
We know more now because we've built some of the software already. We write down more. We draw the user interface more accurately. We envision software that best solves our problems. Only this time the magnitude of the solution is often larger than the first time we envisioned it.
There are lots of good reasons this happens.
We really do know more about the problem after some time has passed and we've started to address it, and that will naturally change the characteristics of our software solution. Sometimes the problem really is more complex than we expected. Sometimes we just want more out of the solution than we were willing to settle for weeks or months ago when we originally envisioned and estimated. Where before we were concerned with making estimates that would fit our desired schedule, now we're a bit more focused on making sure we build the most desirable solution.
No matter how we look at it, what we're building today isn't likely to match the mental picture we imagined back then - and it shouldn't. It would be foolish to ignore new information. The important thing to remember is that the magnitude of our current solution shouldn't be too far out of line from our original estimate. That can be harder than it sounds.
To keep projects on track, we need to work hard to keep our current mental pictures of our software about the same magnitude as our estimated mental pictures. Since we know more now, expect our current mental pictures to be different from our original pictures.
Watching the magnitude of our solution is what I refer to as managing scale.
Building a shared mental picture is difficult
Mistakes do happen.
Sometimes the software solution we imagined really wasn't accurate at all. We imagined it far simpler than it really needed to be. Simpler both functionally and technically. Perhaps we should have built more prototypes… spiked some potential solutions. Perhaps we should have built more UI prototypes to validate user interaction. It's hard to share the same mental picture using words alone.
At this point we can start pointing a lot of fingers. Or, we can see the glass half-full and count our lessons learned so we can do better next time. But, at the end of the day, if we really want to help our customer these aren't very satisfying options.
Treat estimates as solution budgets and focus on solving the original problem
It's time to roll up our sleeves and try something completely different.
It would be foolish to think that the software we originally envisioned was the only possible solution to our customers' problem.
We need to go back to that problem understanding and brainstorm about alternative solutions that can meet our original estimates… actually maybe even lower than our original estimates since we've likely already burned up time building the wrong thing - and gold plating it to boot.
If we can't envision a current solution that matches the magnitude of our original mental picture, we should go back to the original problem the software was meant to solve, and invent a new solution that solves that same problem.
My point with this story is that blaming estimates and estimators is not only foolish and unproductive, it's likely a distraction from the real issue: solving your customer's problem at the price you estimated. It's a red herring.
It doesn't hurt to reflect a little on estimation - to recognize that an estimate is your guess at what it takes to build a software solution - a solution that's also a guess at what might solve your customer's problems.
"But wait," you say. "Those were our requirements." True. The people who told you that's what they wanted really did think that would solve their problem. But the word requirement implies an objective finality that just isn't necessary. At the end of the day, it's a solved problem our customers hope to get - not a shiny new half-built solution. You can't just gather requirements. You need to solve problems.
Treat estimates given at the outset of a project as a budget for some possible solution, not as a bid for one specific solution.
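One way to picture the budget idea is a simple check of the currently envisioned solution against the original estimate. This is a hypothetical sketch - the function name, the tolerance, and the numbers are mine, not the author's:

```python
# A hypothetical sketch of treating an estimate as a solution budget:
# compare the magnitude of the currently envisioned solution against the
# original budget, and signal when it's time to redesign rather than
# blame the estimators.

def budget_status(budget_days, current_design_days, tolerance=0.10):
    """Return a signal based on how far the current design overruns
    the original budget (tolerance is the acceptable overrun fraction)."""
    overrun = (current_design_days - budget_days) / budget_days
    if overrun <= tolerance:
        return "within budget"
    return "over budget: revisit the problem, invent a cheaper solution"

print(budget_status(budget_days=20, current_design_days=21))
print(budget_status(budget_days=20, current_design_days=30))
```

The design choice worth noticing: when the check fails, the remedy is to change the solution, not to relitigate the estimate.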
Lister's Dead Fish
Estimation and requirements are two of the things Tim Lister suggests that nobody wants to talk about on software projects. They're big problems. And the solutions aren't obvious. They're elements of the "dead fish" that Lister talks about.
Dead Fish and Red Herrings
When we do get the courage to talk about dead fish issues we need to then work hard not to fall prey to what look like obvious causes. These are our red herrings.
Before posting this, I showed it to someone whose opinion I trust. "Does this make sense?" I asked. "Actually it seems a bit obvious," she answered. "Is this really a problem? Does this really happen?"
In my experience it does, too often. I've seen many teams and projects focus on improving the quality of estimation. I've seen this lead to more focus on getting requirements right up front - by which I mean more focus on predicting and elaborating the solution than on understanding the problem. I've seen this lead to prolonged requirements phases and huge unproven assumptions about what the solution should be. This can lead to on-time projects delivering solutions that don't actually deliver the business value they're intended to.
I've delivered lots of projects on time to happy customers. In almost all cases the solution delivered wasn't what people originally envisioned. For me, the secret to on-time delivery of software isn't better estimates, but managing those estimates as budgets to reach our desired solution.
Focus on accurate estimates after you've focused on solving your customer's problem. When estimating projects, estimate to set budgets. When designing features, design with the budgets you've set in mind.