
Monthly Archives: January 2013

Construction CPM 2013

Tennessee Williams once said “America has only three cities: New York, San Francisco, and New Orleans. Everywhere else is Cleveland.”  Having now been to all four places I can understand the sentiment.

As I write this, the Construction CPM 2013 conference is in full swing and is proving as dynamic as its host city of New Orleans. A strong line-up of speakers focusing on the tools, techniques, art and science of critical path management is supported by an equally diverse and interesting social program.

Many of the interesting concepts and ideas we have encountered will undoubtedly find their way into our thinking and writing over the next few months, whilst the papers we have presented so far have been well received, with one to go tomorrow. Our presentations are available from:

It’s not all hard work! The networking and social program are always a highlight of Construction CPM, and when you overlay the excitement of the French Quarter of New Orleans you really start to challenge the stamina. The final event last night was a ‘Bourbon on Bourbon Street’; tonight it’s jazz in ‘Fred’s Nightclub’, both finishing at midnight!

If you are involved in project controls, start planning for 2014! If the USA is too far to travel, we have a local Governance and Controls Symposium coming up in Canberra on the 10th of April.

ISO 26000, CSR and Stakeholders

Numerous studies have consistently shown that organisations that support overt corporate social responsibility (CSR) activities – whether by allowing staff to participate in voluntary work, by donating to charities, or through any of the hundreds of similar options for giving back to the wider community – do better than organisations that do not. It is an established fact that organisations that embrace CSR have a better bottom line and more sustained growth; however, what has not been clear from the various studies is why!

Two options regularly canvassed are:

  • Because the organisation is doing well for other reasons, it has the capacity to donate some of the surplus it is generating to the wider community, whereas organisations that are not doing so well need to conserve all of their resources. Factor in the effect of taxation and great PR is generated at a relatively low net cost.
  • Because the organisation does ‘CSR’ it enhances its reputation and, as a consequence, becomes a more desirable place to work, attracting better staff at lower cost. It is also seen as a better organisation to ‘do business with’ and therefore attracts better long-term partners and customers, again at a lower cost than other forms of ‘public relations’ and advertising.

Both of these factors have a degree of truth about them and, frankly, if an organisation does not seek to maximise any competitive advantage its management are failing in their duties. However, this post is going to suggest these are welcome collateral benefits and that the reason CSR is associated with high performance organisations lies much deeper.

We suggest that observable CSR is a measurable symptom of ‘good governance’. The Chartered Institute of Internal Auditors define governance in the following terms:
Governance is about direction, structure, process and control, it also is about the behaviour of the people who own and represent the organisation and the relationship that the organisation has with society. Key elements of good corporate governance therefore include honesty and integrity, transparency and openness, responsibility and accountability.

Consequently, a well governed organisation will generally have a good reputation in the wider community; this is the result of the organisation’s stakeholders giving that organisation credibility and loyalty, trusting that the organisation makes decisions with the good of all stakeholders in mind. It can be summarised as the existence of a general attitude towards the organisation reflecting people’s opinions as to whether it is substantially ‘good’ or ‘bad’. This attitude is connected to, and impacts on, the behaviour of stakeholders towards the organisation, which affects the cost of doing business and ultimately the organisation’s financial performance.

Therefore, if one accepts the concept that the primary purpose of an organisation of any type is to create sustainable value for its stakeholders, and that a favourable reputation is a key contributor to the organisation’s ability to create that value, the importance of having a ‘favourable reputation’ becomes apparent: the reputation affects stakeholder perceptions, which influence the way they interact with the business – and a favourable reputation reduces the cost of ‘doing business’.

However, whilst a well governed organisation needs, and should seek to nurture, this favourable reputation, it is not possible to generate a reputation directly. The organisation’s reputation is created and exists solely within the minds of its stakeholders.

As the diagram below suggests, what is needed and how it is created work in opposite directions!

What the organisation needs is a ‘favourable reputation’, because this influences stakeholder perceptions, which in turn improve the stakeholders’ interactions with the organisation, particularly as customers or suppliers – and this has a demonstrated benefit on the cost of doing business. But an organisation cannot arbitrarily decide what its reputation will be.

An organisation’s ‘real reputation’ is not a function of advertising; it is a function of the opinions held by thousands, if not millions, of individual stakeholders, fed by all of the diverse interactions, communications, social media comments and other exchanges stakeholders have with other stakeholders. Through this process of communication and reflection the perception of a reputation is developed and stored in each individual’s mind. No two perceptions are likely to be exactly the same, but a valuable ‘weight of opinion’ will emerge for any organisation over time. The relevant group of stakeholders important to the business will determine for themselves if the organisation is substantially ‘good’ or ‘bad’. And because of the sheer number of stakeholder-to-stakeholder interactions, once an opinion is generally ‘held’ it is very difficult to change.

The art of governance is firstly to determine the reputation the organisation is seeking to establish, and then to create the framework within which management decisions and actions will facilitate the organisation’s interaction with its wider stakeholder community, consistent with the organisation’s communicated objectives.

Authenticity is critical and ‘actions speak louder than words’ – it does not matter how elegant the company policy is regarding its intention to be the organisation of choice for people to work at; sacking 500 people to protect profits tells everyone:

  1. The organisation places short term profits ahead of people.
  2. The organisation’s communications are not to be trusted.

The way a valuable reputation is created is through the various actions of the organisation and the way the organisation engages with its wider stakeholder community. Experiencing these interactions creates perceptions in the minds of the affected stakeholders about the organisation. These perceptions are reinforced by stakeholder-to-stakeholder communication (consistency helps), and the aggregate ‘weight’ of these perceptions generates the reputation.

The role of CSR within this overall framework is probably less important than the surveys suggest. Most telecommunication companies spend significant amounts on CSR but also have highly complex contracts that frequently end up costing their users substantial sums. Most people, if they feel ‘ripped off’, are going to weight their personal pain well ahead of any positives from an observed CSR contribution and tell their friends about their ‘bad’ perception.

However, as already demonstrated, actions really do speak louder than words – most of an organisation’s reputation will be based on the actual experiences of a wide range of stakeholders and what they tell other stakeholders about their experiences and interactions. Starting at Board level with governance policies that focus on all of the key stakeholder constituencies, including suppliers, customers, employees and the wider community, is a start. Back up the policy with effective employment, surveillance and assurance systems to ensure the organisation generally ‘does good’ and treats all of its stakeholders well and you are well on the way. Then, from this base, CSR will tend to emerge naturally and, if managed properly, becomes the ‘icing on the cake’.

In short, genuine and sustained CSR is a symptom of good governance and a caring organisation that is simply ‘good to do business with’.

Unfortunately, the current focus on CSR will undoubtedly tempt organisations to treat CSR as just another form of advertising expenditure, and if enough money is invested it may have a short-term effect on the organisation’s reputation – but if it’s not genuine, it won’t last.

One resource to help organisations start on the road to a sustainable culture of CSR is ISO 26000: 2010 – Social responsibility.  The Standard helps clarify what social responsibility is, helps businesses and organisations translate principles into effective actions and shares best practices relating to social responsibility. This is achieved by providing guidance on how businesses and organisations can operate in a socially responsible way which is defined as acting in an ethical and transparent way that contributes to the health and welfare of society. Figure 1 provides an overview of ISO 26000.

Interestingly, my view – that understanding who the organisation’s stakeholders really are and engaging with them effectively is the key to success – is also seen as crucial by the standard developers! For more on stakeholder mapping see: http://www.stakeholdermapping.com

Conclusion

This has grown into a rather long post! But the message is simple: effective CSR is a welcome symptom of an organisation that understands, and cares about, its stakeholders – and this type of organisation tends to be more successful than those that don’t!

Looking to earn PDUs / CPD in a great learning environment?

A few interesting options to consider:

  1. The Construction CPM Conference in New Orleans – http://www.constructioncpm.com/
    It’s not too late – we fly out Thursday to enjoy a couple of days in the ‘Big Easy’ before the kick-off Sunday evening (27th Jan.)
  2. The Project Zone Congress in Frankfurt, 18th & 19th March – http://www.projectzonecongress.org
    One of Europe’s leading international conferences and the good news – readers of this blog can claim a 10% discount by entering the code PZ2012_MEDIA02E0AC81 into the discount code field.
    I cannot make it this year but we are planning on being there in 2014.
  3. A specialised and highly focused Governance and Controls Symposium to be held in Canberra on 10th April – http://wired.ivvy.com/event/GCSM13/. The call for papers is still open and if you are serious about the contribution of project controls to value creation this is a ‘must be at’ event.
  4. Later in the year, PMOz in Melbourne, 17th & 18th September – Australia’s best PM networking conference – http://www.pmoz.com.au/

What’s the Probability??

The solution to this question is simple but complex….

There is a 1 in 10 chance the ‘Go Live’ date will be delayed by Project 1
There is a 1 in 10 chance the ‘Go Live’ date will be delayed by Project 2
There is a 2 in 10 chance the ‘Go Live’ date will be delayed by Project 3

What is the probability of going live on March 1st?

To understand this problem let’s look at the roll of dice:

If you roll a die and get a 1 the project is delayed; any other number and it is on time or early.
If you roll one die, the probability it will land on 1 is 1 in 6 = 0.1667 or 16.67%, therefore there is a 100 – 16.67 = 83.33% probability of success.

Similarly, if you roll 2 dice, there are 36 possible combinations, and the possibilities of losing are: 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 6:1, 5:1, 4:1, 3:1, 2:1. (11 possibilities)

The way this is calculated (in preference to using the graphic) is to take the number of ways a single die will NOT show a 1 when rolled (five) and multiply this by the number of ways the second die will NOT show a 1 when rolled. (Also five.) 5 x 5 = 25. Subtract this from the total number of ways two dice can appear (36) and we have our answer…eleven.
(source: http://www.edcollins.com/backgammon/diceprob.htm)

Therefore the probability of rolling a 1 and being late is 11/36 = 0.3056 or 30.56%, giving a 100 – 30.56 = 69.44% probability of being on time.

If we roll 3 dice we can extend the calculation above as follows:
The number of possible outcomes is 6 x 6 x 6 = 216
The number of ways not to show a 1 is 5 x 5 x 5 = 125

Meaning there are 216 combinations and there are 125 ways of NOT rolling a 1
leaving 216 – 125 = 91 possibilities of rolling a 1
(or you can do it the hard way: 1:1:1, 1:1:2, 1:1:3, etc.)

91/216 = 0.4213 or 42.13% probability of failure therefore there is a
100 – 42.13 = 57.87% probability of success.
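
For anyone who wants to check the arithmetic, a few lines of Python will enumerate every dice combination and confirm the figures above (this is just a check on the calculations already shown, nothing more):

    from itertools import product

    def at_least_one_1(num_dice):
        """Count the rolls of 'num_dice' dice that contain at least one 1."""
        rolls = list(product(range(1, 7), repeat=num_dice))   # every combination
        late = sum(1 for roll in rolls if 1 in roll)          # rolls showing a 1
        return late, len(rolls)

    for n in (1, 2, 3):
        late, total = at_least_one_1(n)
        print(f"{n} dice: {late}/{total} ways of rolling a 1 = "
              f"{late / total:.2%} late, {1 - late / total:.2%} on time")

    # Prints:
    # 1 dice: 1/6 ways of rolling a 1 = 16.67% late, 83.33% on time
    # 2 dice: 11/36 ways of rolling a 1 = 30.56% late, 69.44% on time
    # 3 dice: 91/216 ways of rolling a 1 = 42.13% late, 57.87% on time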

So going back to the original problem:

Project 1 has a 1 in 10 chance of causing a delay
Project 2 has a 1 in 10 chance of causing a delay
Project 3 has a 1 in 5 chance of causing a delay

There are 10 x 10 x 5 = 500 possible outcomes and within this 9 x 9 x 4 = 324 ways of not being late. 500 – 324 leaves 176 ways of being late. 176/500 = 0.352 or a 35.2% probability of not making the ‘Go Live’ date.
Or a 100 – 35.2 = 64.8% probability of being on time.

The quicker way to calculate this is simply to multiply the probabilities together:

0.9 x 0.9 x 0.8 = 0.648, or 64.8%
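
The same sort of check works for the original problem – the short sketch below (using the 1 in 10, 1 in 10 and 1 in 5 figures stated above) shows the long enumeration and the multiplication shortcut give the same answer:

    from itertools import product

    p_delay = [0.1, 0.1, 0.2]   # Projects 1, 2 and 3 as stated above

    # Shortcut: multiply the individual probabilities of NOT being delayed
    p_on_time = 1.0
    for p in p_delay:
        p_on_time *= (1 - p)
    print(f"Product method: {p_on_time:.1%} probability of going live on time")

    # Long way: enumerate the 10 x 10 x 5 = 500 equally likely outcomes,
    # with outcome 0 meaning 'this project causes a delay'
    outcomes = list(product(range(10), range(10), range(5)))
    late = sum(1 for a, b, c in outcomes if a == 0 or b == 0 or c == 0)
    print(f"Enumeration: {late}/{len(outcomes)} ways of being late, "
          f"{1 - late / len(outcomes):.1%} probability of being on time")

    # Both print 64.8% (and the enumeration shows the 176/500 ways of being late)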

These calculations have been added to our White Paper on Probability.

A Technical question for the risk experts??

Three schedule activities of 10 days duration each need to be complete before their outputs can be integrated.

Activities 1 & 2 both have a 90% probability of achieving the estimated duration of 10 days.

Activity 3 has an 80% probability of achieving the 10 days.

Scenario 1:

The three activities are in parallel with no cross dependencies; what is the probability of the integration activity starting on schedule?

Possible solution #1

There is a 10% probability of the start being delayed by Activity 1 overrunning.
There is a 10% probability of the start being delayed by Activity 2 overrunning.
There is a 20% probability of the start being delayed by Activity 3 overrunning.

Therefore in aggregate there is a 40% probability of the start being delayed meaning there is a 60% probability of the integration activity starting on time.

Possible solution #2

The three activities are in parallel and the start of the integration is dependent on all 3 activities achieving their target duration. The probability of a ‘fair coin toss’ landing on heads 3 times in a row is 0.5 x 0.5 x 0.5 = 0.125  (an independent series)

Therefore the probability of the three activities achieving ‘on time’ completion as opposed to ‘late’ completion should be 0.9 x 0.9 x 0.8 = 0.648 or a 64.8% probability of the integration activity starting on time.

Which of these probabilities is correct?

Scenario #2

This is the more usual project scheduling situation, where activities 1, 2 and 3 are joined ‘Finish-to-Start’ in series (an interdependent series). Is there any way of determining the probability of activity 4 starting on time from the information provided, or are range estimates needed to deal with the probability of the activities finishing early as well as late?

There is a correct answer and an explanation – see the next post
(it’s too long for a comment).
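
In the meantime, if you want to experiment with Scenario #2 yourself, a rough Monte Carlo sketch along the following lines is one way to explore it. Note that the three-point ranges used here are purely illustrative assumptions – the scenario itself only gives the 90%/90%/80% probabilities, not distributions:

    import random

    random.seed(1)

    # Purely illustrative three-point range estimates (optimistic, most likely,
    # pessimistic) in days - these are assumptions, not part of the scenario.
    activities = [(8, 9.5, 13), (8, 9.5, 13), (8, 9.5, 15)]

    def sample_duration(optimistic, most_likely, pessimistic):
        """One sample from an assumed triangular distribution."""
        return random.triangular(optimistic, pessimistic, most_likely)

    trials = 100_000
    on_time = sum(
        sum(sample_duration(*act) for act in activities) <= 30   # 3 x 10-day chain
        for _ in range(trials)
    )
    print(f"Probability of activity 4 starting on time: {on_time / trials:.1%}")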

Don’t procrastinate about your goals for 2013!

The strategies for achieving goals were purported to have been defined in the “1953 Yale Study of Goals.”  But, as it turns out this study is little more than an often-quoted urban legend that, it appears, was never actually conducted.

However, what’s even stranger is that a recent study by Dr Gail Matthews of the Dominican University of California backs up these ‘mythical’ strategies for achieving goals! The legend is based in fact!! (Read the full report)

So, based on real scientific study, if you want to fulfil your New Year resolutions and achieve your goals for 2013 the verdict is in – write down your goals, ensure they are SMART (Specific, Measurable, Actionable, Realistic and Time-framed), and tell your friends or colleagues.

The research shows that people who write down specific goals for their future are far more likely to be successful than those who have either unwritten goals or no specific goals at all; and that people who wrote down their goals supported by ‘action commitments’, shared this information with a friend, and sent weekly updates to that friend were on average 33% more successful in accomplishing their stated goals than those who merely formulated goals.

Unfortunately, the research did not identify a way to prevent us procrastinating about getting started and actually writing some formulated goals down… but there are some useful ideas in our White Paper on Personal Time Management that can help.

PMBOK #5 Boosts Stakeholder Management

The publication of the PMBOK® Guide 5th Edition is a major boost for stakeholder management. The introduction of Chapter 13, Project Stakeholder Management as a distinct knowledge area raises the importance of engaging stakeholders to the same level as all other PM ‘knowledge areas’. Ideally the new section would have been placed next to the closely aligned process of communication management but this is not to be – the PMBOK is expanded by adding new chapters to the end.

The four processes follow the familiar PMBOK pattern with a few differences. They are:

  • 13.1 Identify Stakeholders – identifying everyone affected by the work or its outcomes.
  • 13.2 Plan Stakeholder Management – deciding how you will engage with the stakeholders.
  • 13.3 Manage Stakeholder Engagement – communicating with stakeholders and fostering appropriate stakeholder engagement
  • 13.4 Control Stakeholder Engagement – monitoring the overall relationships and adjusting your strategies and plans as needed.

The 5 stages of our Stakeholder Circle® methodology are embedded within these processes; the key steps in the Stakeholder Circle® are:

  1. Identify – the primary purpose of 13.1 with very similar objectives.
  2. Prioritize – This is mentioned in 13.1 (Identification) without any real assistance on an effective approach to this important task. The PMBOK recognises most projects are going to be resource constrained and should focus their engagement activities on the important stakeholders, but that’s all – options to calculate a meaningful prioritisation are missing. See more on prioritisation.
  3. Visualize – This is also included in 13.1 (Identification), based on a simple 2 x 2 matrix. A number of options are listed including power/interest, power/influence, and influence/impact grids. The Salience model developed by Mitchell, Agle, and Wood (1997) is also mentioned, without attribution. In reality, to properly understand your stakeholders you need to understand significantly more than two simple aspects of a relationship. The Stakeholder Circle® diagram was adapted from the Salience model to help teams really appreciate who matters and why. This will be the subject of another post in a couple of days’ time.
  4. Engage – the primary purpose of 13.2 (Plan engagement) and 13.3 (Implementing the communication plan). Separating planning and implementation is a good idea. The planning process uses an engagement matrix similar to the tool built into the Stakeholder Circle®. However, whilst the PMBOK looks at the attitude of each stakeholder (both current and desired), it omits the key consideration of how receptive the stakeholder is likely to be to project communication. If the stakeholder does not want to communicate with you, the challenge of changing his/her attitude is a whole lot harder – and the missing priority level lets you know how important this is.
  5. Monitor and Review – whilst this is the focus of 13.4, the assumption seems to be that review and adjustment is a statusing process. Our experience suggests the dynamic nature of a stakeholder community requires the whole cycle, starting with the identification of new and changed stakeholders, to be repeated at regular intervals of 3 or 6 months (or at major phase changes).

Conclusion.

As mentioned at the beginning, the introduction of a separate knowledge area for stakeholder management is a huge advance and should contribute to improving the successful delivery of projects – PMI are to be congratulated on taking this step!

However, unlike most other areas of the PMBOK, the processes outlined in this 5th Edition are likely to be less than adequate for major projects. As soon as there are more than 20 or 30 stakeholders to assess and manage, the tools described in this version will be shown to be inadequate and more sophisticated methodologies will be needed.

Note:
Stocks of the PMBOK® Guide 5th Edition are now in the shops:
Internationally: http://marketplace.pmi.org/Pages/Default.aspx
Australia: http://www.mosaicprojects.com.au/PMP-Pack-LP.html

For other posts on the new PMBOK 5th Edition see: /category/training/pmbok5/

PMBOK® Guide 5th Edition now in stock

Stocks of the new PMBOK® Guide 5th Edition, the Standard for Program Management 3rd Edition and the Standard for Portfolio Management 3rd Edition are now in Australia. These updated standards continue PMI’s efforts to enhance their suite of international standards to remain at the forefront of project management standardisation.

We will be posting more comments after a careful read.  Some initial thoughts are in two earlier posts:
PMBOK 5th Edition some key changes #1
The 5th Edition of the PMBOK gets communication!

For more information and to order these new PMI standards for free delivery in Australia visit: http://www.mosaicprojects.com.au/Books.html#PMI

Note: These new PMI standards are not required for current examinations – PMI will be updating their examinations in Q3 of 2013 to align with the standards and we will be updating our PMP, CAPM and PMI-SP training in Q2 in readiness for the change over.

Value is created by embracing risk effectively

The latest briefing from the real ‘Risk Doctor’, Dr David Hillson (#75: RESOLVING COBB’S PARADOX?), starts with the proposition: When Martin Cobb was CIO for the Secretariat of the Treasury Board of Canada in 1995, he asked a question which has become known as Cobb’s Paradox: “We know why projects fail; we know how to prevent their failure – so why do they still fail?” Speaking at a recent UK conference, the UK Government’s adviser on efficiency Sir Peter Gershon laid down a challenge to the project management profession: “Projects and programmes should be delivered within cost, on time, delivering the anticipated benefits.” Taking up the Gershon Challenge, the UK Association for Project Management (APM) has defined its 2020 Vision as “A world in which all projects succeed.” The briefing then goes on to highlight the basic flaw in these ambitions – the uncertainty associated with various types of risk. (Download the briefing from: http://www.risk-doctor.com/briefings)

Whilst agreeing with the concepts in David’s briefing, I don’t feel he has gone far enough! Fundamentally, the only way to achieve the APM objective of a “world in which all projects succeed” is to stop doing projects! We either stop doing projects – no projects, no risks, no failures – or approximate ‘no risk’ by creating massive time and cost contingencies and taking every other precaution to remove any vestige of uncertainty, the inevitable consequence being to make projects massively time consuming and unnecessarily expensive, resulting in massive reductions in the value created by the few projects that can be afforded.

The genesis of Cobb’s Paradox was a workshop focused on avoidable failures caused by the repetition of known errors – essentially management incompetence! No one argues this type of failure should be tolerated, although bad management practices, mainly at the middle and senior management levels in organisations, and poor governance oversight mean this type of failing is still all too common. (For more on the causes of failure see: Project or Management Failures)

However, assuming good project management practice, good middle and senior management support and good governance oversight, in an organisation focused on maximising the creation of value some level of project failure should be expected; in fact, some failure is desirable!

In a well-crafted portfolio with well managed projects, the amount of contingency included within each project should only be sufficient to offset risks that can be reasonably expected to occur, including variability in estimates and known-unknowns that will probably occur. This keeps the cost and duration of the individual projects as low as possible but, using the Gartner definitions of ‘failure’, guarantees some projects will fail by finishing late or over budget.

Whilst managing unknown-unknowns and low probability risks should remain part of the normal project risk management processes, contingent allowances for this type of risk should be excluded from the individual projects. Consequently, when this type of risk eventuates, the project will fail. However, the effect of the ‘law of averages’ means the amount of additional contingency needed at the portfolio level to protect the organisation from these ‘expected failures’ is much lower than the aggregate ‘padding’ that would need to be added to each individual project to achieve the same probability of success/failure. (For more on this see: Averaging the Power of Portfolios)
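
A rough simulation illustrates the point. All of the numbers below are assumed purely for illustration: ten projects, each with a $1.0M base estimate, some normal estimating variability, and a 10% chance of a $0.5M risk event:

    import random

    random.seed(7)

    def project_cost():
        """One simulated cost outturn ($M) for a project with a $1.0M base estimate."""
        cost = random.gauss(1.0, 0.1)      # normal estimating variability (assumed)
        if random.random() < 0.10:         # low-probability risk event (assumed)
            cost += 0.5
        return cost

    trials, num_projects = 50_000, 10

    def p90(samples):
        """Approximate 90th percentile of a list of samples."""
        return sorted(samples)[int(len(samples) * 0.9)]

    # Contingency needed if every project is padded to a 90% confidence level
    padding_each = p90([project_cost() for _ in range(trials)]) - 1.0

    # Contingency needed if the same 90% confidence is held once, at portfolio level
    portfolio_p90 = p90([sum(project_cost() for _ in range(num_projects))
                         for _ in range(trials)])
    portfolio_contingency = portfolio_p90 - num_projects * 1.0

    print(f"Aggregate padding if added project-by-project: ${num_projects * padding_each:.2f}M")
    print(f"Contingency held at the portfolio level:       ${portfolio_contingency:.2f}M")

With these assumed numbers the contingency held once at the portfolio level comes out markedly lower than the aggregate project-by-project padding – the ‘law of averages’ effect described above.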

Even after all of this there is still a probability of overall failure. If there is a 95% certainty the portfolio will be successful (which is ridiculously high), there is still a 5% probability of failure. Maximum value is likely to be achieved around the 80% probability of success meaning an inevitable 20% probability of failure.

Furthermore, a focus on maximising value also means if you have better project managers or better processes you set tighter objectives to optimise the overall portfolio outcome by accepting the same sensible level of risk. Both sporting and management coaches understand the value of ‘stretch assignments’ – people don’t know how good they are until they are stretched! The only problem with failure in these circumstances is failing to learn and failing to use the learning to improve next time. (For more on this see: How to Suffer Successfully)

The management challenge is firstly to eliminate unnecessary failures by improving the overall management and governance of projects within an organisation. Then rather than setting a totally unachievable and unrealistic objective that is guaranteed to fail, accept that risk is real and use pragmatic risk management that maximises value. As David points out in his briefing: “Projects should exist in a risk-balanced portfolio. The concept of risk efficiency should be built into the way a portfolio of projects is built, with a balance between risk and reward. This will normally include some high-risk/high-reward projects, and it would not be surprising if some of these fail to deliver the expected value.”

Creating the maximum possible value is helped by skilled managers, effective processes and all of the other facets of ‘good project management’ but not if these capabilities are wasted in a forlorn attempt to ‘remove all risk’ and avoid all failure. The skill of managing projects within an organisation’s overall portfolio is accepting sensible risks in proportion to the expected gains and being careful not to ‘bet the farm’ on any one outcome. Then by actively managing the accepted risks the probability of success and value creation are both maximised.

So in summary, failure is not necessarily bad, provided you are failing for the ‘right reason’ – and I would suggest getting the balance right is the real art of effective project risk management in portfolios!

Prediction is very difficult!

“Prediction is very difficult, especially about the future” (Niels Bohr) but project managers, planners and estimators are continually expected to provide management with an ‘accurate prediction’ and suffer the consequences, occasionally even being fired, when their prediction proves to be incorrect.

What is even stranger, most project predictions are reasonably accurate but are classed as ‘wrong’ by the ‘managers’, yet the same managers are quite happy to believe a whole range of other ‘high priced’ predictions that are consistently far less accurate (perhaps we should charge more for our services…).

There seems to be a clear divide in testable outcomes between predictions based on data and predictions based on ‘expertise’, ‘gut feel’ and instinct.

A few examples of ‘expert’ predictions that have gone wrong:

  • Last January The Age newspaper (our local) assembled its traditional panel of economic experts to forecast the next 12 months. 18 out of 20 predicted the Australian dollar exchange rate would remain below US$1. The actual exchange rate has been above US$1 for most of the last 6 months – 10% correct, 90% wrong!
  • The International Monetary Fund (IMF), the European Commission, the Economist Intelligence Unit and the Organization for Economic Cooperation and Development all predicted the European economies would contract by $0.50 for every $1.00 reduction in government expenditure, and therefore, whilst painful, cutting deficit spending would be beneficial. The actual contraction has now been measured by the IMF at $1.50 per dollar, and the reductions in deficit spending are creating more problems than they are solving, particularly in the Euro Zone.
  • Most ‘experts’ predicted a very close race in the last USA Presidential election; the final result was 332 votes for Obama, 206 for Romney.

The surprising fact is that most ‘expert’ predictions are less accurate than a random selection – they are more wrong than pure chance! In his book ‘Expert Political Judgment: How Good Is It?’ Philip Tetlock, then at the University of California, Berkeley, tested 284 famous American ‘political pundits’ using verifiable tests (most of their public predictions were very ‘rubbery’). After 82,000 testable forecasts with 3 potential outcomes, he found the experts’ average was worse than if they had just selected a, b, or c in rotation.

The simple fact is most ‘experts’ vote with the crowd. The reward for ‘staying with the pack’ is you keep your job if you are wrong in good company – whereas if you are wrong on your own you carry all of the blame. There are several reasons for this: experts have reputations to protect (agreeing with your peers helps this), they operate within a collegiate group and know what the group believes is ‘common sense’, they are not harmed by their incorrect forecasts, and we are all subject to a range of biases that make us think we know more than we do (for more on bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf).

There are exceptions to this tendency; some forecasters got the USA election right and weather forecasters are usually accurate to a far greater degree than mythology suggests!!

During the 2012 election campaign, whilst the Romney camp was making headlines with ‘experts’ and supportive TV and radio stations predicting a very close contest, Drew Linzer posted a blog in June 2012 forecasting that the result would be 332/206 and never changed it and the New York Times ‘data geek’ Nate Silver also forecast the result correctly.

What differentiates weather forecasters, Linzer and Silver from the traditional ‘experts’ is the fact their predictions are driven by ‘data’, not expert opinion. Basing predictions on data requires good data and good, tested models. Elections are a regular occurrence and the data modelling of voter intentions has been tested over several cycles; forecasting weather is a daily event. For different reasons, both sets of models have been the beneficiaries of massive investment to develop and refine their capabilities, the input data is reliable and measurable, and the results are testable. I suspect that over the next year or two the political pundits espousing their expert opinions on election results will go the same way as using seaweed or the colour of the sky to predict the weather; they will be seen as cute or archaic skills that are no longer relevant.

But how does this translate to predicting project or program outcomes?

  • First, cost and schedule predictions based on reliable data are more likely to be accurate than someone’s ‘gut feel’, even if that someone is the CEO! Organisations that want predictable project success need robust PMOs to accumulate data and help build reliable estimates based on realistic models.
  • Second, whilst recognising the point above, it is also important to recognise projects are by definition unique and therefore the carefully modelled projections are always going to lack the rigorous testing that polling and weather forecasting models undergo. There is still a need for contingencies and range estimates.

Both of these capabilities/requirements are readily available to organisations today; all that’s needed is an investment in developing the capability. The challenge is convincing senior managers that their ‘expert opinion’ is likely to be far less accurate than the project schedule and cost models based on realistic data. However, another innate bias is assuming you know better than others, especially if you are senior to them.

Unfortunately, until senior managers wake up to the fact that organisations have to invest in the development of effective time and cost prediction systems, and also accept these systems are better than their ‘expert opinion’, project managers, planners and estimators are going to continue to suffer for not achieving unrealistic expectations. Changing this paradigm will require PPPM practitioners to learn how to ‘advise upwards’ effectively; fortunately I’ve edited a book that can help develop this skill (see: Advising Upwards).