Community Comment: Part 17 - Implementing solutions is more important than estimating build effort
- Implementing > Estimating
- Fine-grained confidence levels for software effort estimates are meaningless
- As with story point estimates, categories are better
- Breaking down problems decreases novelty, increases predictability
The following are comments I provided in reaction to a community discussion thread.
Enterprise Agile Strategist at Cloud Content Management Firm:
Q: How do I get accurate estimates?
A: Estimates are BY DEFINITION inaccurate.
Value Stream Architect at Financial Technology Service Provider:
What level of accuracy are you asking for? What is your risk tolerance?
70% confidence?
80%?
85%?
Gfesser:
So I would personally question a development team debating confidence levels with only a 5% or 10% spread, because the choice between such fine-grained levels is arguably arbitrary, making the differences essentially meaningless. I tend to think categories of confidence are better to use than percentages, just as with story point estimates: for example, "low", "medium", and "high", keeping in mind that "medium" confidence doesn't really provide any value. Low confidence means novel development is being tackled, and this type of work can't be predicted. High confidence means either that similar work has been performed in the past, or that the work is straightforward and relatively easy. The bottom line is that once sizing or confidence is quantified, it gives the impression that developers have more predictive ability than they really do. And remember, past performance cannot simply be projected onto future endeavors unless the work is extremely similar, in which case it might as well be automated.
Value Stream Architect at Financial Technology Service Provider:
Erik Gfesser "high confidence means either similar work has been performed in the past…but remember, past performance cannot simply be projected on to future endeavors."
Would you reconcile these two parts of your response please?
Gfesser:
Sure. High confidence is one thing; an accurate prediction is another. These aren't equivalent.
Value Stream Architect at Financial Technology Service Provider:
Thanks. I tend to subscribe to the idea of probabilistic forecasting. I have found that using past performance is very informative and useful in forecasting future work. When I compared the two methods (estimating versus forecasting) side by side over periods of time, the forecasting models were far superior. The kicker for me was the amount of time the two methods require: one requires no additional time from developers, while the other does. Not to mention the psychological safety violations imposed on the team, the additional translation layers required to turn arbitrary sizes into meaningful time values, and so on.
I would rather have the team focus on understanding the problem they are trying to solve, and on breaking that problem down into smaller problems, than worry about putting some estimation figure on it.
If you are able to direct me to some piece of work that creates a good argument about how humans are good at estimating things, I'll reconsider. Until then, I'll continue to leverage statistical analysis and let math do that job for me.
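For context, the probabilistic forecasting referenced above is typically implemented as a Monte Carlo simulation over a team's historical throughput, rather than by asking developers for estimates. The following is a minimal sketch of this approach in Python; the weekly_throughput figures and the forecast_weeks helper are hypothetical, included only for illustration.

```python
import random

# Hypothetical historical throughput: items completed in each of the
# last 12 weeks. Real data would come from the team's tracking tool.
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4, 3, 5]

def forecast_weeks(backlog_size, samples, trials=10_000):
    """Monte Carlo forecast: repeatedly replay randomly drawn past
    weeks until the backlog is exhausted, recording how many weeks
    each simulated run took."""
    outcomes = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(samples)  # sample a past week
            weeks += 1
        outcomes.append(weeks)
    return sorted(outcomes)

runs = forecast_weeks(40, weekly_throughput)

# Report percentile outcomes rather than a single-point estimate.
for pct in (50, 70, 85, 95):
    idx = int(len(runs) * pct / 100) - 1
    print(f"{pct}th percentile: {runs[idx]} weeks")
```

Reporting the 85th percentile rather than a single number is what makes the forecast "probabilistic": it answers the risk-tolerance question posed earlier in the thread (70%? 85%?) directly from the data.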
Gfesser:
Developers aren't adept at estimation, and my comments weren't in defense of estimation. I was simply commenting on the use of percentage-based confidence levels in response to the other poster. Yes, statistical analysis would be more accurate when applied to lower degrees of novelty. After all, confidence intervals are a statistical concept. And breaking down problems tends to decrease the novelty of each component, increasing predictability. But as you implied, what provides greater value: estimating how long it will take to implement a solution, or actually performing the implementation? I think we can at least agree that the latter is the true goal.
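To make the point about breaking down problems concrete: decomposition reduces relative statistical uncertainty, not just novelty. The sketch below uses entirely made-up uniform task durations (my assumption, not from the discussion) to show that the sum of many small independent tasks has a much lower coefficient of variation than one large task of the same expected size.

```python
import random
import statistics

random.seed(42)
TRIALS = 10_000

# One large task with a wide, highly variable duration range.
one_big = [random.uniform(10, 100) for _ in range(TRIALS)]

# The same expected scope decomposed into 10 smaller independent
# pieces; independent variation partially cancels out in the sum.
decomposed = [sum(random.uniform(1, 10) for _ in range(10))
              for _ in range(TRIALS)]

for label, data in (("one big task", one_big),
                    ("10 small tasks", decomposed)):
    mean, stdev = statistics.mean(data), statistics.stdev(data)
    print(f"{label}: mean={mean:.1f}, "
          f"coefficient of variation={stdev / mean:.2f}")
```

Both scenarios have the same expected total (55 units), but the decomposed version's relative spread is roughly a third of the single task's, which is one reason smaller problems are more predictable.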