
Prediction is very difficult!

“Prediction is very difficult, especially about the future” (Niels Bohr), but project managers, planners and estimators are continually expected to provide management with an ‘accurate prediction’ and suffer the consequences, occasionally even being fired, when that prediction proves to be incorrect.

What is even stranger is that most project predictions are reasonably accurate yet classed as ‘wrong’ by managers, while the same managers are quite happy to believe a whole range of other ‘high priced’ predictions that are consistently far less accurate (perhaps we should charge more for our services…).

There seems to be a clear divide in testable outcomes between predictions based on data and predictions based on ‘expertise’, ‘gut feel’ and instinct.

A few examples of ‘expert’ predictions that have gone wrong:

  • Last January The Age newspaper (our local) assembled its traditional panel of economic experts to forecast the next 12 months. 18 out of 20 predicted the Australian dollar exchange rate would remain below US$1. The actual exchange rate has been above US$1 for most of the last six months: 10% correct, 90% wrong!
  • The International Monetary Fund (IMF), the European Commission, the Economist Intelligence Unit and the Organization for Economic Cooperation and Development all predicted the European economies would contract by $0.50 for every $1.00 reduction in government expenditure, and therefore that cutting deficit spending, whilst painful, would be beneficial. The actual contraction has now been measured by the IMF at $1.50 per dollar cut, and the reductions in deficit spending are creating more problems than they are solving, particularly in the Eurozone.
  • Most ‘experts’ predicted a very close race in the last USA Presidential election; the final result was 332 electoral college votes for Obama to 206 for Romney.

The surprising fact is that most ‘expert’ predictions are less accurate than a random selection – they are more wrong than pure chance! In his book ‘Expert Political Judgment: How Good Is It?’ Philip Tetlock of the University of California, Berkeley tested 284 famous American ‘political pundits’ using verifiable tests (most of their public predictions were very ‘rubbery’). Across 82,000 testable forecasts, each with three potential outcomes, he found the experts’ average was worse than if they had simply selected a, b or c in rotation.

The simple fact is that most ‘experts’ vote with the crowd. The reward for ‘staying with the pack’ is that you keep your job if you are wrong in good company, whereas if you are wrong on your own you carry all of the blame. There are several reasons for this: experts have reputations to protect (agreeing with your peers helps this), they operate within a collegiate group and know what the group believes is ‘common sense’, they are not harmed by their incorrect forecasts, and we are all subject to a range of biases that make us think we know more than we do (for more on bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf).

There are exceptions to this tendency: some forecasters got the USA election right, and weather forecasters are usually far more accurate than the mythology suggests!

During the 2012 election campaign, whilst the Romney camp was making headlines with ‘experts’ and supportive TV and radio stations predicting a very close contest, Drew Linzer posted a blog in June 2012 forecasting that the result would be 332/206 and never changed it; the New York Times ‘data geek’ Nate Silver also forecast the result correctly.

What differentiates the weather forecasters, Linzer and Silver from the traditional ‘experts’ is that their predictions are driven by data, not expert opinion. Basing predictions on data requires good data and good, tested models. Elections are a regular occurrence and the data modelling of voter intentions has been tested over several cycles; forecasting the weather is a daily event. For different reasons, both sets of models have been the beneficiaries of massive investment to develop and refine their capabilities; the input data is reliable and measurable, and the results are testable. I suspect that over the next year or two the political pundit espousing an expert opinion on election results will go the same way as using seaweed or the colour of the sky to predict the weather: a cute but archaic skill that is no longer relevant.

But how does this translate to predicting project or program outcomes?

  • First, cost and schedule predictions based on reliable data are more likely to be accurate than someone’s ‘gut feel’, even if that someone is the CEO! Organisations that want predictable project success need robust PMOs to accumulate data and help build reliable estimates based on realistic models.
  • Second, whilst recognising the point above, it is also important to recognise that projects are by definition unique, and therefore even carefully modelled projections will always lack the rigorous testing that polling and weather forecasting models undergo. There is still a need for contingencies and range estimates (a simple sketch of what this can look like follows this list).
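
To make the second point concrete, the sketch below shows one way a planner might turn three-point task estimates into a range estimate with an explicit contingency, using a simple Monte Carlo simulation in Python. The task names, durations and percentile choices are illustrative assumptions, not figures from any real project or tool; the point is simply that a data-based model produces a range (here P50 and P80), not a single ‘accurate’ number.

```python
import random

# A minimal sketch of a data-driven range estimate using Monte Carlo simulation.
# The tasks and three-point estimates below are purely illustrative assumptions;
# in practice the figures would come from the PMO's accumulated historical data.
tasks = {
    "design":   (10, 15, 25),   # (optimistic, most likely, pessimistic) in days
    "build":    (30, 40, 70),
    "test":     (15, 20, 40),
    "handover": (5, 8, 15),
}

def simulate_totals(tasks, runs=10_000):
    """Sample each task from a triangular distribution and sum the samples
    to build a distribution of possible total project durations."""
    totals = []
    for _ in range(runs):
        total = sum(random.triangular(low, high, mode)
                    for (low, mode, high) in tasks.values())
        totals.append(total)
    return sorted(totals)

totals = simulate_totals(tasks)
p50 = totals[len(totals) // 2]         # median outcome
p80 = totals[int(len(totals) * 0.8)]   # a more prudent commitment point
print(f"P50 total duration: {p50:.0f} days")
print(f"P80 total duration: {p80:.0f} days "
      f"(contingency above P50: {p80 - p50:.0f} days)")
```

Reporting the P80 figure alongside the P50 makes the contingency visible and negotiable, rather than hiding it inside a single-point ‘prediction’.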

Both of these capabilities are readily available to organisations today; all that is needed is an investment in developing them. The challenge is convincing senior managers that their ‘expert opinion’ is likely to be far less accurate than project schedule and cost models based on realistic data. However, another innate bias is assuming you know better than others, especially if you are senior to them.

Unfortunately, until senior managers wake up to the fact that organisations have to invest in the development of effective time and cost prediction systems, and also accept that these systems are better than their ‘expert opinion’, project managers, planners and estimators are going to continue to suffer for not achieving unrealistic expectations. Changing this paradigm will require PPPM practitioners to learn how to ‘advise upwards’ effectively; fortunately I’ve edited a book that can help develop this skill (see: Advising Upwards).