No Estimates

07 Mar 2015 17:24

[Image: "One does not simply give an estimate" meme]

I recently became aware of the Twitter #NoEstimates movement, thanks to this post on Slashdot. It goes back to this blog post by Woody Zuill. The hashtag is perhaps deliberately provocative, but it has provided a useful forum for debate. The good news is that there is a well-documented but relatively unknown process that is essentially the same as the one he describes: evolutionary project management. Anyone who has worked with me over the past decade should recognise some of this.

As always on Slashdot, the comments thread is far more informative and entertaining than the original article, although those of a nervous disposition should be warned that the language is forthright. Here’s my own short summary of the collected wisdom of the developer crowd. Please let me know your own take on this!

The "Double It / Convert to degrees Fahrenheit" Approach

So, what do you do when a manager insists they need an estimate, even after you have spent 15 minutes trying to explain to them the process of engineering discovery? Several of the posters disclosed their personal algorithms for estimate inflation, like this (double it), this (triple it), and this (convert to Fahrenheit).
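For the record, those algorithms boil down to something like this (a throwaway sketch of my own, not anyone's production code; what number you actually feed it is, of course, entirely your business):

```python
def inflate_estimate(honest_days: float, algorithm: str = "double") -> float:
    """Apply one of the Slashdot crowd's estimate-inflation algorithms.

    'honest_days' is your actual best guess; what you report is another matter.
    """
    if algorithm == "double":
        return honest_days * 2
    if algorithm == "triple":
        return honest_days * 3
    if algorithm == "fahrenheit":
        # Treat the honest estimate as degrees Celsius and report it in Fahrenheit.
        return honest_days * 9 / 5 + 32
    raise ValueError(f"Unknown inflation algorithm: {algorithm}")


print(inflate_estimate(10, "double"))      # 20
print(inflate_estimate(10, "fahrenheit"))  # 50.0
```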

Although this can work in the short term to get a manager off your back so you can get back to doing productive work as soon as possible, it’s going to come back to bite you. The systemic problems here are the lack of trust and the lack of organisational learning: the next estimate goes through the same algorithm, and history repeats.

The "Function Point Analysis / Story points / T-Shirt Sizing" Approach

Maybe pulling a number out of the air can be improved upon. Maybe we can substitute some kind of proxy for a seemingly 100%-accurate person-days estimate? Back in the day, Function Point Analysis was a popular management technique, which always seemed to me a fairly meaningless count of the bullet points in a requirements specification somewhere.

More recently, the agile movement has adopted this in a different form as story points and T-shirt sizing (S, M, L, XL, XXL, etc.). Both attempt to help prioritize work based on the collective perceived complexity of a task, using an arbitrary scale, perhaps anchored to a well-known ‘M’-sized task. By measuring the number of story points delivered by a stable team over a number of iterations, the idea is that the team’s average ability to deliver (‘velocity’) can be found and then used for future estimation.

This technique has a couple of major weaknesses, however. It is impossible to compare velocity between teams, because of the privately formed consensus within each team about the interpretation of each point value or T-shirt size, although this probably won’t stop managers from doing so. It is also very difficult to stop slipping back into the trap of equating story points to effort-hours or days. The key here is that it is a measure of complexity: the more complex something is, the greater the likely variation in required effort, or, in other words, the greater the +/- uncertainty that should be attached to any effort estimate.
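To make that last point concrete, here is a minimal sketch (my own illustration, with made-up numbers, not any particular tool’s method) of deriving velocity from past iterations and keeping the spread as an explicit +/- range instead of throwing it away:

```python
from statistics import mean, stdev

# Story points actually completed by a stable team in recent iterations
# (hypothetical numbers, purely for illustration).
completed_points = [21, 34, 18, 27, 25]

velocity = mean(completed_points)   # average points per iteration
spread = stdev(completed_points)    # iteration-to-iteration variation

backlog_points = 120                # remaining work, in the team's own scale

# Forecast as a range, not a single number: the spread in past velocity
# is exactly the +/- uncertainty that tends to get lost.
optimistic = backlog_points / (velocity + spread)
pessimistic = backlog_points / (velocity - spread)

print(f"velocity ~ {velocity:.1f} +/- {spread:.1f} points per iteration")
print(f"backlog of {backlog_points} points: "
      f"roughly {optimistic:.1f} to {pessimistic:.1f} iterations")
```

Note that the answer comes out in iterations, on the team’s own scale, not in effort-hours: the moment someone converts it back to hours, the trap has been sprung.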

The "That's too long, you need to give a more realistic deadline" Approach

As if estimating itself were not sufficiently difficult, the real problems start when the numbers are then used, manipulated and abused by others in an organization, usually when they are communicated up the hierarchy and turned into a deadline.

The really valuable information attached to the original estimate (the uncertainty, assumptions, caveats, etc.) gets lost, if it was ever documented at all, and the “4 weeks” estimate gets turned into a today-plus-four-calendar-weeks deadline. People assume you will work 100% on the task and nothing else, or believe that a bit of pressure makes people work harder.

In any organisation that has more people than can comfortably stand around a single coffee machine, there is a need to formalise the co-ordination of tasks between teams (sales, marketing, support) so that the overall delivery is met. Estimates are used, but become meaningless once their context has been lost, because they are not updated to reflect the new reality as work progresses. They become a tool for passing the blame once the deadline inevitably sails by.

How can this situation be improved? When have you seen management hierarchies pay real attention to the risks and uncertainties attached to estimates, and then track them over time?

The "Give up" Approach

If you are very lucky and work for one of the founders of Stack Overflow and Trello, you really can adopt a #NoEstimates approach. Joel wrote an influential blog post back in 2007 on Evidence-Based Scheduling, which is well worth a read. According to one poster on the Slashdot comment thread, Joel’s company uses a single estimate of ‘6-8 weeks’ for everything. Much smaller things: just do them. Much bigger things: don’t do them until you can break them down into smaller things.
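The core idea of Evidence-Based Scheduling is a Monte Carlo simulation over each developer’s history of estimated-versus-actual hours. Very roughly, and this is my own simplified sketch with invented numbers rather than Joel’s implementation, it looks like this:

```python
import random

# Historical velocities for one developer: estimated hours / actual hours.
# A velocity of 0.5 means the task took twice as long as estimated.
# (Hypothetical figures, purely for illustration.)
past_velocities = [0.6, 1.0, 0.8, 0.5, 1.2, 0.7]

# Current estimates, in hours, for the remaining tasks.
remaining_estimates = [8, 16, 4, 24, 12]

def simulate_total_hours(trials: int = 10_000) -> list:
    """Monte Carlo: divide each estimate by a randomly drawn historical velocity."""
    totals = []
    for _ in range(trials):
        total = sum(est / random.choice(past_velocities)
                    for est in remaining_estimates)
        totals.append(total)
    return sorted(totals)

totals = simulate_total_hours()
# Report a distribution of outcomes rather than a single date.
for pct in (50, 80, 95):
    idx = int(len(totals) * pct / 100) - 1
    print(f"{pct}% chance of finishing within {totals[idx]:.0f} hours")
```

The important part is the output: a probability distribution of completion dates, not one number that can be copied into a Gantt chart and forgotten.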

And finally, perhaps a way forward

I am sure most people reading this can recognise examples of all these approaches. It seems to me that there are a few fundamental themes that run through them all:

  • Lack of understanding about what estimates actually are: they are not single numbers but big blobby things with uncertainties, assumptions, and constraints.
  • Similarly, a lack of understanding of how to combine estimates (of smaller tasks into more complex ones, or of tasks across different teams); see the sketch after this list.
  • The only estimates used for scheduling and decision-making are measures of effort & time. I only found a single post out of 250+ in the comment thread that suggested estimating business value as well as level of effort might be a good idea.
  • Human psychology cannot be ignored. We all want to be Scotty Miracle Workers, line up someone else to blame, and believe one-word requirement changes can have no impact on a schedule.
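On that second point, a quick worked example: if the individual task estimates are roughly independent, their uncertainties combine in quadrature rather than simply adding up, so the naive “sum all the worst cases” plan is both over-pessimistic and still misleading. A sketch with made-up numbers:

```python
from math import sqrt

# Each task as (expected_days, plus_minus_days) -- made-up numbers.
tasks = [(10, 4), (5, 2), (20, 8), (8, 3)]

expected_total = sum(mean for mean, _ in tasks)

# Naive approach: just add up all the worst cases.
worst_case_total = sum(mean + spread for mean, spread in tasks)

# If the tasks are roughly independent, uncertainties add in quadrature.
combined_spread = sqrt(sum(spread ** 2 for _, spread in tasks))

print(f"expected total  : {expected_total} days")
print(f"naive worst case: {worst_case_total} days")
print(f"combined        : {expected_total} +/- {combined_spread:.1f} days")
```

Here the naive worst case is 60 days, while the combined estimate is about 43 +/- 10 days; of course, the moment the tasks share people or dependencies, the independence assumption breaks and the spread grows again.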

I would suggest we need techniques that help us:

  • Focus on understanding what we know and, more importantly, what we don’t yet know in sufficient detail to make good decisions

  • Balance the estimation of required effort with an equivalent analysis of the benefits we hope to gain in doing the work.
  • Increase transparency and objectivity of setting deadlines and milestones
  • Recognize constraints and the wider environment in which decisions are made, both internal (people, process) and external (customers, suppliers, regulatory)

Tom Gilb, based on his work at IBM in the 1960s, devised a set of techniques and tools specifically engineered to address these needs, which he calls “Competitive Engineering” (“CE” for short). More recently, his son, Kai Gilb, extended them into an “agile-with-brains” methodology they called “Evolutionary Project Management”, or “Evo” for short. I have been successfully applying these ideas to my own professional work since I first encountered them at Citigroup in 2005.

Although CE and Evo provide a comprehensive suite of requirements engineering and project management techniques, their core is built around a process of estimation:

  • Estimation and quantification of resources and key stakeholder goals at the same time

  • Estimation of the impact (positive or negative!) of alternative possible designs or strategies on the achievement of the goals and resources
  • Estimation of the uncertainties, risks, and credibility of all of these estimates.

The process assists all stakeholders, from management through to developers, in attempting to quantify the needed benefits and value (‘How much “usability” do our customers actually need?’). The resulting numeric estimates are just a useful side-benefit of this technique: the real value is in making these attempts in the first place. Better than any other method I have seen or used, it forces you to evaluate what is really known about the requirements, and makes it very clear where there are major holes or unknowns before you proceed. This can work at any level: from corporate board level to individual agile sprints.
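To give a flavour, here is a deliberately minimal sketch of a CE-style impact estimation table (the designs, goals, and numbers are hypothetical, purely for illustration): each candidate design is scored on how far it is expected to move each quantified goal from its baseline towards its target, with an uncertainty and a credibility rating attached to the evidence behind each cell.

```python
# A minimal, hypothetical impact-estimation table in the spirit of
# Competitive Engineering: impacts are '% of the way from baseline to target',
# each carrying a +/- uncertainty and a 0.0-1.0 credibility for the evidence.
goals = ["Usability", "Response time", "Support cost"]

designs = {
    "Rewrite search UI": {
        "Usability":     {"impact": 40, "uncertainty": 20, "credibility": 0.6},
        "Response time": {"impact": 10, "uncertainty": 10, "credibility": 0.4},
        "Support cost":  {"impact": 20, "uncertainty": 15, "credibility": 0.5},
    },
    "Add caching layer": {
        "Usability":     {"impact": 5,  "uncertainty": 5,  "credibility": 0.7},
        "Response time": {"impact": 60, "uncertainty": 25, "credibility": 0.8},
        "Support cost":  {"impact": 0,  "uncertainty": 10, "credibility": 0.5},
    },
}

for name, impacts in designs.items():
    # Credibility-weighted sum of impacts: a crude way to compare designs.
    score = sum(cell["impact"] * cell["credibility"] for cell in impacts.values())
    print(f"{name}: credibility-weighted impact = {score:.0f}")
    for goal in goals:
        cell = impacts[goal]
        print(f"  {goal}: {cell['impact']}% +/- {cell['uncertainty']}%"
              f" (credibility {cell['credibility']})")
```

Even in this toy form, filling in the cells forces the awkward questions: where is the evidence for that 60%, and why is the credibility only 0.5?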

Get in touch!

Please contact me if you want to discuss any of this in more detail. I have previously written a review of the Competitive Engineering book and a detailed case study from one of my past projects at a large investment bank in which these techniques were used.
