(Stop) Estimating
Marc Evers pointed me to a blog entry by David J. Anderson, "Stop Estimating". David explains his view on estimating from a theory of constraints perspective. Marc's timing was impeccable: unbeknownst to him, I was having a discussion with a development team at a client's site on the benefits and costs of estimating work.
The estimates are not really that important, as I said to Marc yesterday:
The clients have a sort of 'open space' attitude towards the development team: it's done when it's done.
We agreed that this is not as bad as the development team felt it was: it gives the developers the space they need to do quality work.
"Why do the developers feel bad about this?", I wondered. Precise estimation is not a reason to feel good or bad (if it were, that would feel a bit like moralistic programming to me). Estimation accuracy is information about the project, nothing more, nothing less. Frustratingly enough, it is very hard to turn this information into useful action.
For this particular team, I learnt that the developers are getting some definite benefits from the estimating activity, besides the estimates themselves. What we came up with in the heat of the moment was:
Value of estimation
- (when predictable) possibility to make promises on delivery dates
- signal of uncertain or unclear wishes, or lack of clarity on design modifications (when estimates are difficult to make, or are proven to be highly inaccurate)
- creates communication and shared knowledge – if programmers make different estimates for the same task, they have to explain how they got to their estimate. This creates shared understanding.
Cost of estimation
- several hours a week, spent on discussing estimates, as well as design.
- time spent on meta-estimation – discussing why estimation works or not
One of the striking things when the development team reflected on their estimates yesterday was that they're quite capable of giving a rough estimate on a large, not yet well-defined task. Say, 'this is going to take two to three months'. And then they discover that estimating for the next two days is extremely difficult. Perhaps the answer is in David's counter entry, "Agile Estimating":
An agile estimate should have a number of value units, a velocity, an end date and a buffer (or a measure of variation in velocity). That’s it. You shouldn’t be estimating anything individually. Fine grained individual estimates of effort are waste – muda! Just say “No!”
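David's coarse-grained "agile estimate" boils down to simple arithmetic: backlog size divided by velocity, padded with a buffer for variation. A minimal sketch (the function name, numbers, and weekly cadence are my own illustrative assumptions, not David's):

```python
# Hypothetical sketch of a coarse-grained agile estimate:
# total value units, a measured velocity, and a buffer for variation.
from datetime import date, timedelta

def projected_end_date(start: date, value_units: int,
                       velocity_per_week: float,
                       buffer_weeks: float) -> date:
    """Project an end date from backlog size and velocity,
    padded with a buffer to absorb variation in velocity."""
    weeks_needed = value_units / velocity_per_week + buffer_weeks
    return start + timedelta(weeks=weeks_needed)

# e.g. 60 value units at 5 units/week, with a 2-week buffer
print(projected_end_date(date(2024, 1, 1), 60, 5.0, 2.0))  # 2024-04-08
```

Note there is no per-task estimate anywhere in this calculation, which is exactly David's point.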
As a sort of 'training wheels', (detailed) estimation can be very useful. It helps a team become aware of all the work they are doing, and makes the team aware of what they know, what they can know, and what they don't (and sometimes can't) know.
Trying estimation in a certain way has to be done consistently for a certain period of time, since the effects of estimation are mostly medium-term. After trying a set way of estimating, the team becomes more experienced, and is able to decide what they want to estimate, and how and why they do it.
Just saying no can be done in a responsible manner, once you’ve experienced several alternatives – varying from no estimation at all to too detailed estimation.
By the way, if you're having an argument on velocity in your XP team, and you have been doing stories for a while, I definitely second David's recommendation to take into account a measure of variation in velocity.
If you feel velocity is problematic, creating an information radiator for the velocity, and drawing an upper and lower variation band around it (say 5 or 10%) will show you if velocity means anything in the project, or something else is going on.
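The radiator described above can be sketched in a few lines: compute the mean velocity, draw a band around it, and see which iterations fall outside. A minimal sketch with illustrative numbers (the function name and the sample velocities are my own assumptions; the 10% band width is taken from the text):

```python
# Sketch of the velocity "information radiator" with variation bands.
def velocity_bands(velocities, band=0.10):
    """Return mean velocity, the lower/upper band, and the iterations
    falling outside the band. If most points fall outside, the velocity
    number probably doesn't mean much for this project."""
    mean = sum(velocities) / len(velocities)
    lower, upper = mean * (1 - band), mean * (1 + band)
    outside = [v for v in velocities if not (lower <= v <= upper)]
    return mean, lower, upper, outside

# e.g. story points completed in five consecutive iterations
mean, lo, hi, outliers = velocity_bands([18, 25, 12, 30, 20])
print(f"mean={mean:.1f}, band=({lo:.1f}, {hi:.1f}), outside: {outliers}")
```

With these sample numbers, four of the five iterations fall outside a 10% band around the mean of 21, which is the kind of picture that tells you the velocity figure carries little information on its own.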
This was one of the first theory of constraints / lean related things I took to my practice. The effect was, ehm, interesting. I soon realized not everyone in and around the team wanted to see and/or hear that the variation was so large that the velocity numbers didn't mean anything.
Watch for fingers being put in ears when you explain the variation bands in the graph….