by Ger Cloudt, author of “What is Software Quality?”
Organizations value predictability in their development projects. High predictability enables Sales, for example, to sell what can actually be delivered. It enables the organization to negotiate contracts that can be fulfilled; when obligations are met, no sales are lost.
However, the predictability of software development seems to be low. Agile principles, such as developing in small increments with a Potentially Shippable Product at the end of each sprint, are one way to address this predictability problem. Small increments enable frequent, on-time deliveries, though perhaps not with the desired scope. This is less of a problem because the next release, which will be available soon, will contain the missing features.
Still, this incremental approach is not applicable to every kind of software development, which brings us back to the predictability of software development.
In the development of multi-disciplinary embedded products, predictability remains important. Since software is released and updated less frequently than in a true Agile environment, the functionality available in a release matters more: if a feature is not included, the customer must wait for the next release, which might take a considerable amount of time; worse, if the product is not remotely upgradable, the feature may never arrive at all.
Cone of uncertainty
In 1995, Boehm et al. presented the estimate convergence graph, also known as the cone of uncertainty, which states that for any given feature set, estimation precision can improve only as the software itself becomes more refined.
As the cone of uncertainty shows, investing more time and effort in developing, understanding, and analyzing requirements, and even in building the software itself, results in more accurate estimates. At the initial concept stage, estimates can be off by roughly a factor of four in either direction, converging toward the actual outcome only as the project progresses. Yet even when everything seems clear, complete, and understood, uncertainty remains and will cause deviations from the plan. Uncertainty cannot be eliminated, and therefore predictability will never reach 100%.
Puzzle analogy
How, then, to explain that software estimates are difficult and imprecise? For this I would like to use an analogy: solving puzzles such as crosswords, cryptograms, or Sudokus.
To explain the difficulties we experience in software estimation, I would ask somebody to estimate how long it would take to solve a booklet of puzzles. Most likely you will get questions in return: What is the difficulty level of the puzzles? What kind of puzzles are you referring to? How many puzzles need to be solved? How big are they? Am I allowed to use a dictionary? Well… I do not know, but please provide an estimate anyway. You can imagine the accuracy of such an estimate.
Estimating the effort needed to solve an unknown set of puzzles is comparable to the estimates typically asked for roadmapping purposes: the development team is given a few one-line requirements and asked for an estimate so that a roadmap can be plotted.
We can then take the next step and provide the actual booklet of puzzles. Looking through the booklet gives better insight, and the initial estimate will most likely be adjusted to reflect it.
The more time you spend examining the puzzles and understanding their difficulty, the better your estimate to complete them will become. In real software development this is comparable to collecting, understanding, and analyzing requirements, and perhaps doing some pre-development or prototyping for high-risk areas.
To achieve an even better estimate, you could not only look at the booklet but actually solve some puzzles: measure how long they take and count the puzzles that remain unsolved. This is what we call measuring velocity and applying it to the remaining work to predict when you will deliver.
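To make the arithmetic concrete, here is a minimal sketch with made-up numbers: a booklet of 40 puzzles, of which 12 are solved in 3 evenings. The same calculation underlies sprint-based forecasting, with story points instead of puzzles and sprints instead of evenings.

```python
# Hypothetical figures: 12 of 40 puzzles solved in 3 evenings.
puzzles_solved = 12
evenings_spent = 3
puzzles_remaining = 28

velocity = puzzles_solved / evenings_spent  # 4 puzzles per evening
forecast = puzzles_remaining / velocity     # 7 more evenings

print(f"Velocity: {velocity:.1f} puzzles per evening")
print(f"Forecast: {forecast:.1f} evenings to finish")
```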
You would expect a pretty accurate estimate, right?
However, once several puzzles have been solved and velocity seems to be stable, I will come in, tear out a number of puzzles, and add others to the work to be done. This is comparable to requirements changing and being added during the project, which happens throughout development.
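Continuing the same made-up sketch, such a scope change simply shifts the forecast, and only if velocity really does stay stable:

```python
# Same hypothetical figures as before: velocity of 4 puzzles per
# evening, 28 puzzles still to solve.
velocity = 4.0
puzzles_remaining = 28

# Mid-project scope change: 5 puzzles torn out, 9 new ones added.
puzzles_remaining = puzzles_remaining - 5 + 9  # now 32

forecast = puzzles_remaining / velocity        # 8 more evenings
print(f"Revised forecast: {forecast:.1f} evenings to finish")
```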
Another problem during your puzzle solving is that you will suddenly run into puzzles of unexpectedly high complexity. If the puzzles solved so far rate between three and five stars, some puzzles will be rated much higher, and your velocity will drop tremendously. These high-complexity puzzles are comparable to difficult problems during development: hard or nearly impossible to reproduce and even harder to solve. You will also run into puzzles that relate to previously solved puzzles, and to solve these new ones you have to redo the earlier ones.
And we have not yet talked about external influences on your puzzle solving. What if you run out of pencils? Or I come into the room and replace all pencils with a cheaper type? Or your dictionary suddenly goes missing? Compare this to Corporate IT performing a security update on the network: if you are lucky the update is done over the weekend and does not affect your project, but there is a risk you will have problems on Monday.
If estimating puzzle solving is already hard… how about estimating software?
As you can see, estimating puzzle solving, an activity anybody can perform, is already hard and inaccurate. Now imagine the development of a large software system, which can only be done by highly skilled engineers. How can we expect accurate estimates?