

Estimating Updates

Over the last couple of weeks, we have been updating the estimating pages on our website, partly in response to the #NoEstimating idiocy.

There is no way an organization that intends to survive will undertake future work without an idea of the resources, time, and cost needed to achieve the objective, and an understanding of the anticipated benefits – this is an elementary aspect of governance. This requires estimating! BUT there are two distinctly different approaches to estimating software development and maintenance:

1.  Where the objective is to maintain and enhance an existing capability, the estimate is part of the forward budgeting cycle and focuses on the size of the team needed to keep the system functioning appropriately. Management’s objective is to create a stable team that ‘owns’ the application. Methodologies such as Scrum and Kanban work well, and the validity of the estimate is measured by metrics such as trends in the size of the backlog. For more on this, download De-Projectizing IT Maintenance from: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1

2.  Where the objective is to create a new capability, project management cuts in. Projects need an approved scope and budget, which requires an estimate! The degree of detail in the estimate needs to be based on the level of detail in the scope documents. If the scope, or objectives, are only defined at the overall level, there’s no point in trying to second-guess future developments and create an artificially detailed estimate. But, with appropriate data, high-level estimates can be remarkably useful (a minimal sketch follows this list). Then, once the project is approved, normal PM processes cut in and work well. Some of the sources of useful benchmarking data are included in our updated estimating software list at: https://mosaicprojects.com.au/PMKI-SCH-030.php#Cost
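As an illustration of the kind of high-level, benchmark-driven estimate described in point 2, here is a minimal sketch in Python. The productivity rate, project size, and uncertainty band are hypothetical placeholders, not figures from the benchmarking sources linked above; in practice they would be calibrated from that data or from your own PMO’s history.

```python
# Illustrative high-level (parametric) estimate. All figures are hypothetical.

def parametric_effort(size_fp: float, productivity_fp_pm: float = 12.0) -> float:
    """Effort in person-months for a project of size_fp function points,
    assuming a (hypothetical) delivery rate in function points per person-month."""
    return size_fp / productivity_fp_pm

def range_estimate(size_fp: float, uncertainty: float = 0.35):
    """Return (low, likely, high) effort, widening the point estimate to
    reflect the limited level of scope definition at this stage."""
    likely = parametric_effort(size_fp)
    return likely * (1 - uncertainty), likely, likely * (1 + uncertainty)

if __name__ == "__main__":
    low, likely, high = range_estimate(450)   # 450 function points (assumed size)
    print(f"High-level effort estimate: {low:.0f} / {likely:.0f} / {high:.0f} person-months")
```

The point is not the arithmetic; it is that with credible benchmarking data even a crude parametric model produces a defensible range for approval purposes, which the normal PM processes then refine as the scope firms up.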

The #NoEstimating fallacies include:

The fantasy that software is ‘different’ – it’s not! All projects have a degree of uncertainty which creates risk. Some classes of project may be less certain than others, but using reliable benchmarking data will tell you what the risks and the range of outcomes are likely to be.

Estimates should be accurate – this is simply WRONG (but is a widely held myth in the wider management and general community)! Every estimate of a future outcome will be incorrect to some degree. The purpose of the estimate is to document what you thought should occur, which provides a baseline for comparison with what is actually occurring. This comparison highlights the difference (variance) between the planned and actual outcomes to create management information. This information is invaluable for directing attention towards understanding why the variance is occurring and adjusting future management actions (or budget allowances) to optimize outcomes.
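To make the planned-versus-actual comparison concrete, here is a minimal sketch of the variance calculation described above; the phases, figures, and 10% investigation threshold are invented for illustration only.

```python
# Minimal planned-vs-actual cost variance report. All figures are invented.

planned = {"Design": 120, "Build": 300, "Test": 150}   # baseline cost per phase
actual  = {"Design": 135, "Build": 280, "Test": 190}   # actual cost to date

for phase, plan in planned.items():
    act = actual[phase]
    variance = act - plan                    # positive = over the baseline
    pct = variance / plan * 100
    flag = "investigate" if abs(pct) > 10 else "ok"
    print(f"{phase:<7} plan={plan:>4} actual={act:>4} "
          f"variance={variance:+4} ({pct:+.0f}%) -> {flag}")
```

The value is not in the subtraction; it is in the questions the variances trigger: why is the work tracking differently from the baseline, and what management action (or budget adjustment) follows?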

Conclusion

The fundamental flaw in #NoEstimating is its idiotic assumption that an organization that commits funding and resources to doing something without any concept of how long it is going to take, or what it will cost, will survive. Good governance requires the organizational leadership to manage the organization’s assets for the benefit of the organization’s stakeholders. This does not preclude risk taking (in many industries risk taking is essential). But effective risk taking requires a framework to determine when a current objective is no longer viable so the work can be closed down and the resources redeployed to more beneficial objectives. For more on portfolio management and governance see: https://mosaicprojects.com.au/PMKI-ORG.php

In summary, #NoEstimating is stupid, but trying to produce a fully detailed estimate based on limited information is nearly as bad. Prudent estimating requires a balance between what is known about the project at the time, a proper assessment of risk, and the effective use of historical benchmarking data to produce a usable estimate which can be improved and updated as better information becomes available. For more on cost estimating see: https://mosaicprojects.com.au/PMKI-PBK-025.php#Process1

Prediction is very difficult!

“Prediction is very difficult, especially about the future” (Niels Bohr), but project managers, planners and estimators are continually expected to provide management with an ‘accurate prediction’ and suffer the consequences, occasionally even being fired, when their prediction proves to be incorrect.

What is even stranger, most project predictions are reasonably accurate but are classed as ‘wrong’ by the ‘managers’, yet the same managers are quite happy to believe a whole range of other ‘high-priced’ predictions that are consistently far less accurate (perhaps we should charge more for our services…).

There seems to be a clear divide in testable outcomes between predictions based on data and predictions based on ‘expertise’, ‘gut feel’ and instinct.

A few examples of ‘expert’ predictions that have gone wrong:

  • Last January, The Age newspaper (our local) assembled its traditional panel of economic experts to forecast the next 12 months. 18 out of 20 predicted the Australian dollar exchange rate would remain below US$1. The actual exchange rate has been above US$1 for most of the last 6 months – 10% correct, 90% wrong!
  • The International Monetary Fund (IMF), the European Commission, the Economist Intelligence Unit and the Organization for Economic Cooperation and Development all predicted the European economies would contract by $0.50 for every $1.00 reduction in government expenditure, and therefore, whilst painful, cutting deficit spending would be beneficial. The actual contraction has now been measured by the IMF at $1.50 per dollar, and the reductions in deficit spending are creating more problems than they are solving, particularly in the Euro Zone.
  • Most ‘experts’ predicted a very close race in the last USA Presidential election; the final result was 332 electoral votes for Obama, 206 for Romney.

The surprising fact is that most ‘expert’ predictions are less accurate than a random selection – they are more wrong than pure chance! In his book ‘Expert Political Judgment: How Good Is It?’, Philip Tetlock of the University of California, Berkeley, tested 284 famous American ‘political pundits’ using verifiable tests (most of their public predictions were very ‘rubbery’). Across 82,000 testable forecasts, each with 3 potential outcomes, he found the experts’ average was worse than if they had just selected a, b, or c in rotation – that is, worse than the roughly one-in-three hit rate pure rotation would deliver.

The simple fact is most ‘experts’ vote with the crowd. The reward for ‘staying with the pack’ is that you keep your job if you are wrong in good company – whereas if you are wrong on your own you carry all of the blame. There are several reasons for this: experts have reputations to protect (agreeing with your peers helps this), they operate within a collegiate group and know what the group believes is ‘common sense’, they are not harmed by their incorrect forecasts, and we are all subject to a range of biases that make us think we know more than we do (for more on bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf).

There are exceptions to this tendency; some forecasters got the USA election right, and weather forecasters are usually accurate to a far greater degree than mythology suggests!

During the 2012 election campaign, whilst the Romney camp was making headlines with ‘experts’ and supportive TV and radio stations predicting a very close contest, Drew Linzer posted a blog in June 2012 forecasting that the result would be 332/206 and never changed it, and the New York Times ‘data geek’ Nate Silver also forecast the result correctly.

What differentiates weather forecasters, Linzer and Silver from the traditional ‘experts’ is the fact that their predictions are driven by ‘data’, not expert opinion. Basing predictions on data requires good data and good, tested models. Elections are a regular occurrence and the data modelling of voter intentions has been tested over several cycles; forecasting weather is a daily event. For different reasons, both sets of models have been the beneficiaries of massive investment to develop and refine their capabilities, the input data is reliable and measurable, and the results are testable. I suspect that over the next year or two, political pundits espousing their expert opinions on election results will go the same way as using seaweed or the colour of the sky to predict the weather: they will be seen as cute but archaic skills that are no longer relevant.

But how does this translate to predicting project or program outcomes?

  • First, cost and schedule predictions based on reliable data are more likely to be accurate than someone’s ‘gut feel’, even if that someone is the CEO! Organisations that want predictable project success need robust PMOs to accumulate data and help build reliable estimates based on realistic models.
  • Second, whilst recognising the point above, it is also important to recognise that projects are by definition unique, and therefore the carefully modelled projections are always going to lack the rigorous testing that polling and weather forecasting models undergo. There is still a need for contingencies and range estimates (a minimal Monte Carlo sketch follows this list).
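As a sketch of what ‘contingencies and range estimates’ can look like in practice, the fragment below runs a simple Monte Carlo simulation over three-point task estimates. The task list, distributions, and percentiles are assumptions chosen for illustration, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three-point (optimistic, most likely, pessimistic) durations in weeks.
# The tasks and figures are invented for illustration.
tasks = {
    "Requirements": (3, 4, 7),
    "Design":       (4, 6, 10),
    "Build":        (8, 12, 20),
    "Test":         (4, 6, 12),
}

n = 20_000
total = np.zeros(n)
for low, likely, high in tasks.values():
    # Sample each task's duration; the tasks are assumed to run sequentially.
    total += rng.triangular(low, likely, high, n)

p50, p80 = np.percentile(total, [50, 80])
print(f"P50 duration: {p50:.1f} weeks")
print(f"P80 duration: {p80:.1f} weeks")
print(f"Schedule contingency (P80 - P50): {p80 - p50:.1f} weeks")
```

Reporting the estimate as a range (here P50 and P80) with an explicit contingency is what allows the inevitable variances to be managed rather than treated as failures.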

Both of these capabilities/requirements are readily available to organisations today; all that’s needed is an investment in developing the capability. The challenge is convincing senior managers that their ‘expert opinion’ is likely to be far less accurate than the project schedule and cost models based on realistic data. However, another innate bias is assuming you know better than others, especially if you are senior to them.

Unfortunately, until senior managers wake up to the fact that organisations have to invest in the development of effective time and cost prediction systems, and also accept these systems are better than their ‘expert opinion’, project managers, planners and estimators are going to continue to suffer for not achieving unrealistic expectations. Changing this paradigm will require PPPM practitioners to learn how to ‘advise upwards’ effectively; fortunately, I’ve edited a book that can help develop this skill (see: Advising Upwards).

Mathematical Modelling of Project Estimates

I have just finished reading a very interesting paper by Dr. Pavel Barseghyan, Problem of the Mathematical Theory of Human Work; the paper is available from the PM World Today web site.

Dr. Barseghyan’s key message is the unreliability of historical data for predicting future project outcomes using simple regression analysis. This is similar to the core argument I raised in my paper Scheduling in the Age of Complexity, presented to the PMI College of Scheduling conference in Boston earlier this year. Historical data is all we have, but it cannot be relied on due to the complexity of the relationships between the various project ‘actors’. As a practitioner, I was looking at the problem from an ‘observed’ perspective; it’s fascinating to see rigorous statistical analysis obtaining similar outcomes.
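As a small illustration of this limitation, the sketch below fits a simple linear regression to made-up ‘historical’ project data and shows how wide the spread around the fitted line can be. It is not a reconstruction of Dr. Barseghyan’s analysis, just a reminder of how loose a simple-regression prediction really is.

```python
import numpy as np

# Invented 'historical' projects: size (function points) vs. actual effort (person-months).
size   = np.array([100, 150, 200, 250, 300, 400, 500, 650, 800, 1000])
effort = np.array([ 11,  14,  25,  21,  38,  35,  60,  55, 110,   95])

slope, intercept = np.polyfit(size, effort, 1)      # simple linear regression
fitted = slope * size + intercept
residual_sd = np.std(effort - fitted, ddof=2)       # spread around the fitted line

new_size = 700                                      # a hypothetical new project
point = slope * new_size + intercept
print(f"Point estimate for a {new_size} FP project: {point:.0f} person-months")
print(f"Approximate 95% band: {point - 2*residual_sd:.0f} "
      f"to {point + 2*residual_sd:.0f} person-months")
```

The regression produces a reassuringly precise-looking number, but the band around it is wide, and nothing in the historical sample can anticipate the complex interactions between project ‘actors’ that the paper is concerned with.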

A counterpoint to Dr. Barseghyan’s second argument, that improved analysis will yield more correct results, is the work of N.N. Taleb, particularly in his book ‘The Black Swan’. Taleb’s arguments go a long way towards explaining much of the GFC – models based on historical data cannot predict unknown futures. For more on this argument see: http://www.edge.org/3rd_culture/taleb08/taleb08_index.html

Personally, I feel both of these lines of reasoning need to be joined in the practice of modern project management. We need the best possible predictors of likely future outcomes, based on effective modelling (as argued by Dr. Barseghyan). But we also need to be aware that even the best predictions cannot control the future, and to adopt prudent, effective and simple risk management processes that recognise each project is a unique journey into the unknown.

I would certainly recommend reading Dr. Barseghyan’s paper.