The practical realities of software estimation

In IT we are often asked to estimate the expected schedule or cost of software development. Sadly, the desire of stakeholders for “predictable” schedules or costs results in significant dysfunction within a software development team. When a team is forced by its stakeholders to commit to a schedule or cost, it must then ensure that the schedule or cost doesn’t slip. For example, to protect themselves from increased time and cost due to scope creep, development teams will make it difficult for stakeholders to change their requirements during Construction, and may even go so far as to drop promised scope late in a project. The desire of stakeholders to reduce their financial risk often drives behaviors within the development team that ensure stakeholders don’t get what they actually want. Naturally, IT gets blamed for this.

Practical Realities of Software Guesstimation

We need to do better. In this blog post we summarize what we know to be true about software development estimation. In no particular order:

  1. Estimates are guesses. Look the word up in a dictionary: an estimate is a rough approximation or calculation – in other words, a guess. Unfortunately, too many people treat estimates as promises, or worse yet as guarantees. In our opinion “guesstimate” is a far more appropriate word than “estimate”.
  2. Scope on IT projects is a moving target. Our stakeholders struggle to tell us what they want and even when they do they change their minds anyway.  Any guesstimate based on varying scope must also vary in kind.
  3. Guesstimates are probability distributions. Although your stakeholders may ask for a fixed-amount guesstimate, for example “this will cost $1 million”, the reality is that there’s a chance the cost will be less than $1 million and a very good chance that it will be more. There is ample evidence that the initial estimate for a software development project should be given as a range of -25% to +75%, so your million-dollar project should be quoted as a range of $750,000 to $1,750,000. This is shown in the diagram above by the green distribution curve. In many organizations this can be politically difficult to do, and strangely enough in many cases stakeholders prefer to be lied to (“it’s going to be $1,000,000”) rather than be told the truth.
  4. Guesstimates must reflect the quality of the inputs.  A guesstimate needs to reflect the quality of the information going into it – if your scope is fuzzy your guesstimate based on that scope needs to be equally fuzzy.  Sometimes stakeholders want a guesstimate with a tight range, perhaps +/- 10% (the red curve in the diagram), early in a project.  To provide a tight range such as this you need to have a very good understanding of the requirements and the design.  Early in the software development process this can only be done through more detailed modeling, an expensive and risky proposition which often proves to be a wasted effort because the requirements will evolve over time anyway.
  5. Guesstimates anchor perception. The primary danger of providing guesstimates to people is that they believe them.  Tell someone that it’s going to be $1,000,000 and they fixate on that cost even while they are changing their minds.  Tell them that it’s going to be between $750,000 and $1,750,000 and most people will fixate on the cost of $750,000.  Some people will focus on the average cost of $1,250,000 even though the median was $1,000,000 (guesstimates are in effect Weibull probability distributions).
  6. It’s easier to guesstimate small things. ‘Nuff said.
  7. It’s easier to guesstimate work you’re just about to do instead of work in the distant future. It is much easier to identify the details of work to be done right now, and thus turn a large piece of work into a collection of smaller pieces that are easier to guesstimate.  In part this is because you have a much better understanding of the current situation you are working in and in part because you are more focused on the here and now.
  8. The people doing the work will likely give a better guesstimate.   They are more motivated to get the guesstimate right, particularly when they must commit to it, and have a much better idea of their abilities.  Granted, someone may need to coach people through the guesstimation effort.  In Disciplined Agile this is a responsibility of the team lead.
  9. Someone who has done the work before will give a better guesstimate than someone who hasn’t. Experience counts.
  10. Guesstimates reflect the situation that you face.  Which organizational situation do you think will result in a short schedule and lower cost: Five people co-located in a single room or the same five people working from different locations in difficult time zones?  Or how about a team working under regulatory constraints versus the same team without those constraints?  Context counts.
  11. Multiple guesstimates are better than a single guesstimate. Getting guesstimates from several people provides insights from several points of view, hopefully prompting an intelligent conversation that enables you to develop a guesstimate with better confidence.  Similarly, the same person producing guesstimates for the same piece of work using different guesstimation strategies will also provide a range of answers that you can combine.
  12. Guesstimates should be updated over time. As your understanding of what stakeholders want improves, and as you learn how well your team works together, you should update your guesstimates. As the quality of the fundamental inputs into your guesstimate improves you can produce a better guesstimate, thereby enabling its stakeholders to make better decisions.
  13. It costs money to produce a guesstimate. The precision of an estimate is driven by the detail and stability of the information going into it. Want a tighter range on your estimate?  Then you’re going to have to have a better handle on the requirements, design, and capabilities of the team doing the work.  This greater precision requires greater cost.  The fundamental question posed by the #NoEstimates community is effectively “Is the value of improved decision making capability from having the guesstimate greater than the cost of creating the guesstimate?” The implication is that you must ensure the cost is much less than the benefit, hence their focus on finding ways to streamline and even eliminate the guesstimation effort.
  14. Guesstimation is far more art than science. See point #1 about estimates being guesses.  The best guesstimates are done by the people doing the work, just before they need to do the work, for small pieces of work.
  15. Formal software guesstimation schemes are little more than a scientific façade. Function point counting, feature point counting, and COCOMO II are all examples of formal strategies. They boil down to generating numeric guesses from your detailed requirements and design, then plugging those guesses into an algorithm that produces a guesstimate. These are all expensive strategies (they require detailed requirements and design work to be performed) that prove to be risky in practice (because they often force you into a waterfall approach). Yes, they do in fact work to some extent, but in practice there are much less expensive and less risky strategies to choose from. People like these types of guesstimation strategies because their complexity and cost provide a false sense of security.
  16. Past history isn’t as valuable as people hope. Some formal guesstimation strategies are based on past history, but this proves to be a false foundation to build upon, for several reasons. First, people have different levels of capability, which change over time as they learn. Capers Jones has shown that developers have productivity ranges of 1 to 25, the implication being that if you don’t know exactly who is on a team and how well they work together, your corporate history will be questionable. Second, technologies evolve quickly, so past history from working with older versions of technologies, or with completely different technologies, becomes questionable at best. Third, people and teams change (hopefully for the better) over time, implying that an input into your guesstimate is fuzzy at best. Fourth, because every team is unique and faces a unique situation, basing estimates on past history from other teams in different situations proves questionable.
  17. Beware professional guesstimators. They tend to break many of the rules we’ve described above.
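The point about guesstimates being probability distributions (points 3 and 5) can be sketched numerically. Here is a minimal Python sketch, assuming an illustrative right-skewed (Weibull-style) cost distribution around the $1,000,000 point estimate; the distribution parameters are our own assumptions for illustration, not data from this post:

```python
# Sketch: why a single point estimate hides the real shape of the cost.
# Uses the classic early-phase range of -25% to +75% around a
# $1,000,000 point estimate (the figures from points 3 and 5).

import random
import statistics

point_estimate = 1_000_000
low, high = point_estimate * 0.75, point_estimate * 1.75
print(f"Quoted range: ${low:,.0f} to ${high:,.0f}")

# Model the cost as a right-skewed (Weibull-like) distribution:
# most outcomes cluster near the point estimate, with a long tail
# of overruns. The scale/shape parameters here are illustrative only.
random.seed(42)
samples = [point_estimate * random.weibullvariate(1.25, 2.0)
           for _ in range(100_000)]

mean = statistics.mean(samples)
median = statistics.median(samples)
print(f"mean   = ${mean:,.0f}")    # pulled upward by the long tail
print(f"median = ${median:,.0f}")  # the "50/50" outcome is lower
assert mean > median  # right skew: the average exceeds the median
```

This is why anchoring on the single quoted number (or on the bottom of the range) is misleading: for a skewed distribution the average overrun sits above the most likely outcome.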

To summarize, when you are required to provide estimates for your software development efforts you should take a pragmatic, lightweight approach to doing so. This blog post has provided many practical insights that should help guide your decisions. These insights, and many more, are built right into the Disciplined Agile (DA) toolkit.


9 comments on “The practical realities of software estimation”

Do the guesstimators have training in the area on which they pontificate? For example, if someone estimating the cost of creating an installer patch has only “on-the-job” training, then their skill level is probably low.

It may come as a shock, but most of us prefer formally trained physicians.


I like all of these, but #14 stands out; people seem puzzled when you don’t know immediately how long it will take to do something. When I am asked to do something sizable, I say: give me one or two days to start and see how far I get, then we can extrapolate.

Have you thought about this in terms of fixed-cost contracts? (I am sure you have, but I ask anyway.) To give you an example: a software company with a main product whose customers request enhancements, where the company comes back with a fixed effort/cost and a probable date based on its schedule of releases. The most experienced people are indeed making these estimates, but turnover is reducing this pool. A more flexible approach would be an improvement, in my opinion, but how do you sell that to customers used to fixed cost?

Getting facts on how successful this has been over the years is tough, but my year of exposure tells me smaller projects usually meet the cost and time while bigger ones usually don’t. I think each customer may only be exposed to the latter infrequently, so it may not be a big pain point for any one of them at a point in time, and there doesn’t seem to be a user group for them to compare experiences.


David, a few thoughts:
1. Asking for a couple of days to dive into the problem before providing a guesstimate is a good idea. You’re improving your understanding of the situation by doing that, effectively improving the quality of the inputs going into the guesstimate.
2. We’ve done a lot of work around fixed cost/bid projects. I have a few harsh words to say about this topic at . I think you’ll like some of the links at the bottom of the article that lead to articles questioning the ethics around fixed price projects. And of course we’ve built several strategies for funding IT delivery teams right into the DA framework, see
3. Selling customers on better strategies is easy if you have a trust-based relationship with them. They’re smart people, they can see for themselves that fixed price doesn’t work but they often don’t know that they’ve got a choice.
4. Smaller projects have a much higher chance of being on time and on budget than larger ones. They’re smaller, so they’re easier to guesstimate in the first place. They’re shorter, so there’s less opportunity for things to go wrong. They’re smaller, so they’re likely to be less risky. The Standish Group has some good data correlating the size of a project with the chance of success (as projects get larger, the rate of success goes down).

Valentin Tudor Mocanu

Disciplined Agile has a few powerful tools to improve the estimation process.

(Disciplined Agile) Proven architecture milestone – this must be a checkpoint that also validates the assumptions about the architecture and solution made when envisioning the release.

(Re)estimations inside the release must be supported by modeling (clarifying requirements and design), where a good practice that should precede any kind of estimation is Look Ahead Modeling (we cannot have good Look Ahead Planning, containing the corresponding estimations, without Look Ahead Modeling).
Note: Look Ahead Modeling is a Disciplined Agile/Agile Modeling practice.


Fantastic article! I couldn’t agree more that in many cases a development estimate is just an illusion and often sets unreasonable expectations. I work on a technology team where development is funded from several different lines of businesses following a project model. Between the funding model, frequent team member changes and rapidly changing technology and architecture, our software “guesstimates” seem pretty silly pretty quickly. Some of the time being spent upfront to come up with these guesstimates could be better spent elsewhere.

Glen Alleman

Brian, you’ve listed all the items that appear on most root-cause lists for project failures: changing staff, inconsistent funding profiles, unreasonable expectations, changing architectures.
Scott’s article says that when the work is unstable, the technology is unstable, the customer doesn’t know what they want, and a myriad of other dysfunctions are present – estimates are just guesses.

In our domain – software-intensive system of systems in aerospace, defense, and embedded control systems – that’s called Doing Stupid Things on Purpose (DSTOP).

When you DSTOP, your estimates are no better than uninformed opinions.

So just to be clear – yes, an estimate is an approximation, an approximation derived via one or more standard approaches. No, estimates are not guesses. Guesses are uninformed by data or models. To guess is to estimate or suppose something without sufficient information to be sure of being correct. So if you’re making estimates without sufficient information then you’re guessing. And when you’re guessing you’re not estimating.

But estimates are NOT guesses. The article makes that claim, then sets out to show how bad management, bad estimating practices, and low-maturity teams result in “guessing” that gets called estimating.


Great article! I have gone so far as to say estimates are never wrong. I claim that because the information I had at the time drove the estimate. I grant you it may not represent what actually happens, but I don’t know that at the beginning of a project. And, as your article states, so much changes over the course of a project.
When I provide an estimate, I provide a confidence factor that conveys roughly the amount of information I had to create the estimate. I also set the expectation that I’ll change the estimate over the course of the project as information becomes available.
Number 16 above has lots of great points on history I’ve never realized – thanks!

Glen Alleman

Here’s an approach to estimating in our Software Intensive System of Systems world,

The article’s suggestions about the “trouble with estimates” are not about estimating; they’re about bad practices in management and estimating. I’m responding to each point in a separate blog post with the corrective actions for each topic that result in “credible” estimates at the appropriate phases of the project. I’d suggest Steve McConnell’s work on estimating software development using agile processes as another good place to start, to see that each of the article’s points has an easy fix.

But estimates are never wrong; they only vary in their degrees of precision and accuracy.

Laurent Thomas

Great article. Just to be pedantic: the $1M figure in point 5 (guesstimates anchor perception) is not the median but the mode. The first number that comes to mind is (usually) also the most frequent one. Moreover, good old PERT considers the Beta distribution more accurate than the Weibull, especially when doing a Monte Carlo experiment to compute the range estimate of the total duration.
I also think that cognitive biases are a real annoyance and should be considered by anyone producing an estimate (anchoring is one of these biases; confirmation bias, expectation bias, and so on are others worth knowing). Agility, being human-centric, would certainly benefit greatly from taking human biases into account and making them more visible.
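The PERT-style Monte Carlo approach mentioned in this comment can be sketched in a few lines of Python. The task names and the three-point (optimistic, most likely, pessimistic) figures below are illustrative assumptions, not data from the article:

```python
# Sketch: a Monte Carlo range estimate of total duration built from
# PERT-style three-point estimates per task. All task names and
# durations are hypothetical, chosen only to illustrate the technique.

import random

tasks = {              # (optimistic, most likely, pessimistic) in days
    "design": (3, 5, 12),
    "build":  (10, 15, 35),
    "test":   (4, 6, 15),
}

def pert_sample(o, m, p):
    """Draw one duration from a PERT Beta distribution.
    Standard PERT parameterization: Beta(alpha, beta) scaled to
    [o, p], with lambda = 4 weighting toward the most likely value."""
    alpha = 1 + 4 * (m - o) / (p - o)
    beta = 1 + 4 * (p - m) / (p - o)
    return o + (p - o) * random.betavariate(alpha, beta)

random.seed(1)
totals = sorted(
    sum(pert_sample(*tp) for tp in tasks.values())
    for _ in range(20_000)
)

# Report the total as a range, not a single number (see point 3).
p10 = totals[int(0.10 * len(totals))]
p50 = totals[int(0.50 * len(totals))]
p90 = totals[int(0.90 * len(totals))]
print(f"10% chance the total is under {p10:.1f} days")
print(f"median total duration is      {p50:.1f} days")
print(f"90% chance the total is under {p90:.1f} days")
```

Quoting the resulting percentile range to stakeholders, rather than a single number, is one concrete way to present a guesstimate as the probability distribution it really is.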

