
Tag Archives: Risk

Risk Management Update

Mosaic’s risk management pages have been reorganized and updated. All of the papers are available for download and use free of charge. There are also free samples of a couple of useful spreadsheets for assessing risk and planning the management of important risks.

The risk section of our website is now in two parts:

Risk Management covers the processes involved in the identification and management of risk within a project or program to achieve and maintain a risk profile acceptable to the key stakeholders: https://mosaicprojects.com.au/PMKI-PBK-045.php 

Risk Assessment covers the techniques and tools used to calculate and assess the risk exposure of a project or program: https://mosaicprojects.com.au/PMKI-PBK-046.php

CPM Anomalies Invalidate Monte Carlo

A couple of weeks ago I posted on some of the anomalies in CPM logic that will cause unexpected results: CPM Scheduling – the logical way to error #1. A comment on the post by Santosh Bhat started me thinking about the effect of these logical constructs on risk analysis.

The various arrangements of activities and links shown in CPM Scheduling – the logical way to error #1 (with the addition of a few more non-controlling links) follow all of the scheduling rules tested by DCMA and other assessments. The problem is that when you change the duration of a critical activity, there is either no effect, or the reverse effect, on the overall schedule duration.

In this example, the change in the overall project duration is the exact opposite of the change in the duration of Activity B (read the previous post for a more detailed explanation).  For this discussion, it is sufficient to know that an increase of 2 weeks in the duration of ‘B’ results in a reduction of the overall project duration of 2 weeks (and vice-versa).
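To see the reversal in code, here is a minimal sketch (Python) of a forward pass over an assumed ladder of finish-to-finish and start-to-start links – not necessarily the exact network from the earlier post, but one that reproduces the same anomaly:

```python
# Assumed network: A --FF--> B --SS--> C, all lags zero.
# B's early start floats back from its FF-constrained finish, so making
# B longer pulls B's start (and therefore C's start) earlier.
def project_duration(dur_a, dur_b, dur_c):
    ef_a = dur_a                  # A starts at time 0
    ef_b = ef_a                   # FF link: B must finish when A finishes
    es_b = ef_b - dur_b           # B's start is driven by its finish
    es_c = es_b                   # SS link: C starts when B starts
    ef_c = es_c + dur_c
    return max(ef_a, ef_b, ef_c)  # project finish = latest early finish

print(project_duration(10, 2, 5))  # 13
print(project_duration(10, 4, 5))  # 11: B is 2 longer, project is 2 shorter
```

The same arithmetic works in reverse: shortening B lengthens the project.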

The effect of these anomalies on the veracity of a Monte Carlo analysis is significant. The essence of Monte Carlo is to analyse a schedule hundreds of times using different activity durations selected from a pre-determined range that represents the uncertainty associated with each of the identified risks in a schedule. If the risk event occurs, or is more serious, the affected activity duration is increased appropriately (see more on Monte Carlo).
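As a rough illustration of the mechanics (not any particular vendor’s tool), the sketch below samples two activity durations from three-point ranges, applies one risk event with an assumed probability and impact, and estimates the probability of finishing by a target date; all of the numbers are illustrative:

```python
import random

ITERATIONS = 5_000
risk_probability, risk_impact = 0.3, 4.0   # illustrative assumptions

finishes = []
for _ in range(ITERATIONS):
    a = random.triangular(8, 14, 10)        # low, high, most likely
    b = random.triangular(4, 9, 5)
    if random.random() < risk_probability:  # the risk event fires this run
        b += risk_impact
    finishes.append(a + b)                  # simple two-activity chain

target = 18.0
p_on_time = sum(f <= target for f in finishes) / ITERATIONS
print(f"P(finish <= {target}) = {p_on_time:.0%}")
```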

In addition to calculating the probability of completing by any particular date, most Monte Carlo tools also generate tornado charts showing the comparative significance of each risk included in the analysis and its effect on the overall calculation.  For example, listing the risks that have the strongest correlation between the event occurring and the project being delayed.  

Tornado charts help the project’s management to focus on mitigating the most significant risks.
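A tornado chart of this kind can be approximated by correlating, across iterations, whether each risk fired against the simulated project duration, then sorting. A minimal sketch follows (Python 3.10+ for statistics.correlation; the risk names, probabilities and impacts are illustrative assumptions):

```python
import random
from statistics import correlation   # Python 3.10+

risks = {"Design rework": (0.4, 5.0),    # name: (probability, impact)
         "Late approvals": (0.2, 8.0),
         "Weather": (0.5, 2.0)}

runs = 5_000
fired = {name: [] for name in risks}
totals = []
for _ in range(runs):
    duration = random.triangular(20, 30, 24)   # base schedule duration
    for name, (p, impact) in risks.items():
        hit = random.random() < p
        fired[name].append(1.0 if hit else 0.0)
        if hit:
            duration += impact
    totals.append(duration)

# Strongest correlation first - the order of the tornado chart's bars.
for name in sorted(risks, key=lambda n: -correlation(fired[n], totals)):
    print(f"{name:15s} r = {correlation(fired[name], totals):+.2f}")
```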

When a risk is associated with an activity that causes one of the anomalies outlined in CPM Scheduling – the logical way to error #1, the consequences are a reduction in the accuracy of the overall probability assessments and, more importantly, a reduction in the significance of the risk in tornado charts. The outcome of the anomalous modelling is to challenge the fundamental basis of Monte Carlo. There are more examples of similar logical inconsistencies that will devalue Monte Carlo analysis in Section 3.5 of Easy CPM.

Easy CPM is designed for schedulers who know how to operate the tools efficiently and are looking to lift their skills to the next level. The book is available for preview, purchase (price $35), and immediate download, from: https://mosaicprojects.com.au/shop-easy-cpm.php

Probability -v- luck. Should we give up our day-job?

Based on a successful day at the races, 5 winners and one place from 8 bets, this article looks at the balance between luck and process in achieving the result.  Our conclusion is that you should not confuse luck with skill. Good processes will help build success, persistence will generate more opportunities for you to be lucky, and skill or capability will shift the odds in your favour, but randomness rules!

To quote Coleman Cox: ‘I am a great believer in Luck. The harder I work, the more of it I seem to have.’

Click to download the PDF.

For more papers on risk and probability see: https://mosaicprojects.com.au/PMKI-SCH-045.php#Process1

The reference case for management reserves

Risk management and Earned Value practitioners, and a range of standards, advocate the inclusion of contingencies in the project baseline to compensate for defined risk events. The contingency may (should) include an appropriate allowance for variability in the estimates modelled using Monte Carlo or similar; these are the ‘known unknowns’.  They also advocate creating a management reserve that should be held outside of the project baseline, but within the overall budget to protect the performing organisation from the effects of ‘unknown unknowns’.  Following these guidelines, the components of a typical project budget are shown below.

PMBOK® Guide Figure 7-8

The calculations of contingency reserves should be incorporated into an effective estimating process to determine an appropriate cost estimate for the project[1]. The application of appropriate tools and techniques supported by skilled judgement can arrive at a predictable cost estimate which in turn becomes the cost baseline once the project is approved. The included contingencies are held within the project and are accessed by the project management team through normal risk management processes. In summary, good cost estimating[2] is a well understood (if not always well executed) practice, that combines art and science, and includes the calculation of appropriate contingencies. Setting an appropriate management reserve is an altogether different problem.
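As a simple arithmetic sketch of the rollup shown in Figure 7-8 (all numbers are illustrative):

```python
# Cost baseline = work package estimates + contingency reserve (known unknowns).
# Project budget = cost baseline + management reserve (unknown unknowns).
activity_estimates = [120_000, 80_000, 50_000]  # summed work packages
contingency_reserve = 30_000   # inside the baseline, released via risk management
management_reserve = 25_000    # outside the baseline, held by the organisation

cost_baseline = sum(activity_estimates) + contingency_reserve
project_budget = cost_baseline + management_reserve
print(f"cost baseline:  {cost_baseline:,}")    # 280,000
print(f"project budget: {project_budget:,}")   # 305,000
```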

 

Setting a realistic management reserve

Management reserves are an amount of money held outside of the project baseline to ‘protect the performing organisation’ against unexpected cost overruns. The reserves should be designed to compensate for two primary factors: the first is genuine ‘black swans’; the other is estimating errors (including underestimating the levels of contingency needed).

The definition of a ‘black swan’ event is a significant unpredicted and unpredictable event[3].  In his book of the same name, N.N. Taleb defines ‘Black Swans’ as having three distinct characteristics: they are unexpected and unpredictable outliers, they have extreme impacts, and they appear obvious after they have happened. The primary defence against ‘black swans’ is organisational resilience rather than budget allowances but there is nothing wrong with including an allowance for these impacts.

Estimating errors leading to a low cost baseline, on the other hand, are both normal and predictable; there are several different drivers for this phenomenon, most of them innate to the human condition. The factors leading to the routine underestimating of costs and delivery times, and the overestimating of benefits to be realised, can be explained in terms of optimism bias and strategic misrepresentation. The resulting inaccurate estimates of project costs, benefits, and other impacts are a major source of uncertainty in project management – the occurrence is predictable and normal; the degree of error is the unknown variable leading to risk.

The way to manage this component of the management reserves is through the application of reference class forecasting which enhances the accuracy of the budget estimates by basing forecasts on actual performance in a reference class of comparable projects. This approach bypasses both optimism bias and strategic misrepresentation.

Reference class forecasting is based on theories of decision-making in situations of uncertainty, and promises more accuracy in forecasts by taking an ‘outside view’ of the projects being estimated. Conventional estimating takes an ‘inside view’ based on the elements of the project being estimated – the project team assesses the elements that make up the project and determines a cost. This ‘inside’ process is essential, but on its own insufficient to achieve a realistic budget. The ‘outside’ view adds to the base estimate, based on knowledge about the actual performance of a reference class of comparable projects, and resolves to a percentage markup to be added to the estimated price to arrive at a realistic budget. This addition should be used to assess the value of the project (with a corresponding discounting of benefits) during the selection/investment decision-making processes[4], and logically should be held in management reserves.

Overcoming bias by simply hoping for an improvement in the estimating practice is not an effective strategy!  Prof. Bent Flyvbjerg’s 2006 paper ‘From Nobel Prize to Project Management: Getting Risks Right[5]’ looked at 70 years of data.  He found: Forecasts of cost, demand, and other impacts of planned projects have remained constantly and remarkably inaccurate for decades. No improvement in forecasting accuracy seems to have taken place, despite all claims of improved forecasting models, better data, etc.  For transportation infrastructure projects, inaccuracy in cost forecasts in constant prices is on average 44.7% for rail, 33.8% for bridges and tunnels, and 20.4% for roads.

The consistency of the error and the bias towards significant underestimating of costs (and a corresponding overestimate of benefits) suggest the root causes of the inaccuracies are psychological and political rather than technical – technical errors should average towards ‘zero’ (plusses balancing out minuses) and should improve over time as industry becomes more capable, whereas there is no imperative for psychological or political factors to change:

  • Psychological explanations can account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience[6].
  • Political factors can explain inaccuracy in terms of strategic misrepresentation. When forecasting the outcomes of projects, managers deliberately and strategically overestimate benefits and underestimate costs in order to increase the likelihood that their project will gain approval and funding either ahead of competitors in a portfolio assessment process or by avoiding being perceived as ‘too expensive’ in a public forum – this tendency particularly affects mega-projects such as bids for hosting Olympic Games.

 

Optimism Bias

Reference class forecasting was originally developed to compensate for the type of cognitive bias that Kahneman and Tversky found in their work on decision-making under uncertainty, which won Kahneman the 2002 Nobel Prize in economics[7]. They demonstrated that:

  • Errors of judgment are often systematic and predictable rather than random.
  • Many errors of judgment are shared by experts and laypeople alike.
  • The errors remain compelling even when one is fully aware of their nature.

Because awareness of a perceptual or cognitive bias does not by itself produce a more accurate perception of reality, any corrective process needs to allow for this.

 

Strategic Misrepresentation

When strategic misrepresentation is the main cause of inaccuracy, differences between estimated and actual costs and benefits are created by political and organisational pressures, typically to have a business case approved, or a project accepted, or to get on top of issues in the 24-hour news cycle.

The Grattan Institute (Australia) has reported that in the last 15 years Australian governments had spent $28 billion more than taxpayers had been led to expect on transport infrastructure projects. One of the key ‘political drivers’ for these cost overruns was announcing the project (to feed the 24-hour news cycle) before the project team had properly assessed its costs.  While ‘only’ 32% of the projects were announced early, these projects accounted for 74% of the value of the cost overruns.

Reference class forecasting will still improve accuracy in these circumstances, but the managers and estimators may not be interested in this outcome because the inaccuracy is deliberate. Biased forecasts serve their strategic purpose, which overrides their commitment to accuracy and truth; consequently, the application of reference class forecasting needs strong support from the organisation’s overall governance functions.

 

Applying Reference Class Forecasting

Reference class forecasting does not try to forecast the specific uncertain events that will affect a particular project, but instead places the project in a statistical distribution of outcomes from the class of reference projects.  For any particular project it requires the following three steps (sketched in code after the list):

  1. Identification of a relevant reference class of past, similar projects. The reference class must be broad enough to be statistically meaningful, but narrow enough to be truly comparable with the specific project – good data is essential.
  2. Establishing a probability distribution for the selected reference class. This requires access to credible, empirical data for a sufficient number of projects within the reference class to make statistically meaningful conclusions.
  3. Comparing the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
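A minimal sketch of the three steps in code, using assumed overrun ratios (actual cost ÷ estimated cost) for a hypothetical reference class – real data would come from the organisation’s own records:

```python
from statistics import quantiles

# Step 1: the reference class - overrun ratios from comparable past projects.
reference_class = [1.05, 1.42, 0.98, 1.21, 1.63, 1.10, 1.35, 1.08,
                   1.50, 1.27, 1.18, 1.44, 1.02, 1.31, 1.56]

# Step 2: establish the distribution; here the 80th percentile (P80).
p80 = quantiles(reference_class, n=10)[7]   # 8th cut point = 80th percentile

# Step 3: compare the specific project with the distribution to turn the
# 'inside view' estimate into a realistic budget.
inside_estimate = 2_500_000
print(f"uplift: {p80 - 1.0:.0%}")
print(f"realistic budget: {inside_estimate * p80:,.0f}")
```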

The UK government (Dept. of Treasury) was an early user of reference class forecasting and continues the practice.  A 2002 study by Mott MacDonald for the Treasury found that, over the previous 20 years of government projects, the average works duration was underestimated by 17%, CAPEX was underestimated by 47%, and OPEX was underestimated by 41%.  There was also a small shortfall in benefits realised.

 

This study fed into the updating of the Treasury’s ‘Green Book’ in 2003, which is still the standard reference in this area. The Treasury’s Supplementary Green Book Guidance: Optimism Bias[8] provides the recommended range of markups with a requirement for the ‘upper bound’ to be used in the first instance by project or program assessors.

These are very large markups to shift from an estimate to a likely cost, and they relate to the UK government’s estimates (ie, the client’s view), not the contractors’ final estimates – errors of this size would bankrupt most contractors.  However, Gartner and most other authorities routinely report that projects and programs overrun their cost and time estimates (particularly internal projects and programs), and the reported ‘failure rates’ and overruns have remained relatively stable over extended periods.

 

Conclusion

Organisations can choose to treat each of their project failures as a ‘unique one-off’ occurrence (another manifestation of optimism bias) or learn from the past and develop their own framework for reference class forecasting. The markups don’t need to be included in the cost baseline (the project’s estimates are its estimates, and the team should attempt to deliver as promised); but they should be included in the assessment process for approving projects, and the management reserves held outside of the baseline to protect the organisation from the effects of both optimism bias and strategic misrepresentation.  As systems, and particularly business cases, improve, the reference class adjustments should reduce, but they are never likely to reduce to zero: optimism is an innate characteristic of most people, and political pressures are a normal part of business.

If this post has sparked your interest, I recommend exploring the UK information to develop a process that works in your organisation: http://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent

______________________

[1] For more on risk assessment see: http://www.mosaicprojects.com.au/WhitePapers/WP1015_Risk_Assessment.pdf

[2] For more on cost estimating see: http://www.mosaicprojects.com.au/WhitePapers/WP1051_Cost_Estimating.pdf

[3] For more on ‘black swans’ see: /2011/02/11/black-swan-risks/

[4] For more on portfolio management see: http://www.mosaicprojects.com.au/WhitePapers/WP1017_Portfolios.pdf

[5] Project Management Journal, August 2006.

[6] For more on the effects of bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf

[7] Kahneman, D. (1994). New challenges to the rationality assumption. Journal of Institutional and Theoretical Economics, 150, 18–36.

[8] Green Book documents can be downloaded from: http://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent

Risk management handbook published

The Risk Management Handbook edited by Dr. David Hillson (the ‘risk doctor’) is a practical guide to managing the multiple dimensions of risk in modern projects and business.  We contributed Chapter 10: Stakeholder risk management.

The 23 chapters survey the risk management landscape, providing a broad and up-to-date introduction to risk, with expert guidance on current best practice and cutting-edge insight into new developments within risk management.

For more on the book, see: www.koganpage.com/product/the-risk-management-handbook-9780749478827

The language used to define risks can contribute to failure.

If a risk is going to be adequately managed, it needs to be defined.  Failing to describe the actual risk (or risks) will almost inevitably lead to project failure and will frequently exacerbate the damage.

In recent times there seems to have been an explosion of documents in the public domain, including academic papers (where one would have hoped the reviewers and editors knew better), listing as ‘risks’ factors that cannot ever be risks.  Stating the ‘fact’ hides the real or consequential risks that may be manageable.

Risk 101 – a risk is an uncertainty that may affect a project objective if it occurs. For something to be a risk, there has to be an uncertainty and the uncertainty may have a positive or negative impact on one or more objectives (see more on risk management). Risk management involves balancing the uncertainty, its potential impact and the cost and effort needed to change these for the better. But to do this you need to focus on the uncertainties that can be managed.

One of the more frequently misdescribed risks is ‘technical complexity’.  The degree of technical difficulty involved in a project is a FACT that can be measured and described!  Some projects, such as launching a space rocket, are technically complex, others less so; but NASA has a far higher success rate in its rocket launches than most IT departments have in developing software applications that achieve their objectives.  The technical difficulty may give rise to consequential risks that need addressing, but these risks have to be identified and catalogued if they are going to be managed. Some of the risks potentially arising out of technical complexity include:

  • Inadequate supply of skilled resources in the marketplace / organisation;
  • Management failing to allow adequate time for design and testing;
  • Allowing technicians to ‘design in’ unnecessary complexity;
  • Management failing to provide appropriately skilled resources;
  • Management lacking the skills needed to properly estimate and manage the work;
  • Etc.

Another common risk in many of these pseudo risk lists is ‘lack of senior management support’.  This is a greyer area: the project team’s perception of management support and the actual level of support from senior management may differ. Developing an understanding of the actual attitude of key senior managers requires a methodical approach using tools such as the Stakeholder Circle.  However, even after defining the actual attitude of important senior managers, the lack of precision in the risk description will often hide the real risks and their potential solutions or consequences:

  • If there is a real lack of senior management support, the project should be cancelled; its probability of failure is greater than 80% and continuing is simply wasting money.
  • If the problem is senior management failing to understand the importance of the project, this is an issue (it exists) and the solution is directed communication (see more on directed communication). The risk is that the directed communication effort will fail, leading to project failure; this risk needs careful monitoring.
  • If the problem is a project sponsor (or steering committee) who is not committed to project success, and/or a sponsor (or steering committee) who lacks understanding of the role (see more on the role of a sponsor), this is another issue, with a solution based in education or replacement. Depending on the approach to resolving the issue (and its guaranteed impact on project success if it remains unresolved), the risk is either that the necessary education process may not work, and/or that poor governance and senior management oversight will allow the issue to continue unresolved – these specific risks need to be explicitly described and acknowledged if they are to be managed.

The first step to managing risks effectively is developing a precise description of the actual risk that requires managing. If there are several associated risks, log each one separately and then group them under a general classification.   The description of each risk is best done using a common meta language such as:

  • ‘[Short name]: If a [description of risk] caused by [cause of risk] occurs, it may cause [consequence of occurrence]’. For example:
  • ‘Storms: If a heavy thunderstorm caused by summer heat occurs, it may cause flooding and consequential clean up’.

For each risk you need to:

  • Define the risk category and short name;
  • Describe the risk using an effective ‘risk meta language’;
  • Determine if the risk is an opportunity or threat and quantify its effect;
  • Prioritise the risk using a qualitative assessment process;
  • Determine the optimum response;
  • Implement the response and measure its effectiveness (see more on risk assessment).

A simple Excel template such as this can help: http://www.mosaicprojects.com.au/Practical_Risk_Management.html#Tools
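For those who prefer code to a spreadsheet, here is a minimal sketch of a register entry capturing these steps (the scoring scales and field names are illustrative assumptions, not Mosaic’s template):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    short_name: str
    description: str      # risk meta language: 'If a [risk] caused by
                          # [cause] occurs, it may cause [consequence]'
    is_opportunity: bool  # opportunity or threat
    probability: int      # qualitative scale, 1 (rare) .. 5 (almost certain)
    impact: int           # qualitative scale, 1 (negligible) .. 5 (severe)
    response: str         # the chosen optimum response

    def priority(self) -> int:
        # Simple qualitative prioritisation: probability x impact.
        return self.probability * self.impact

storm = Risk(
    short_name="Storms",
    description="If a heavy thunderstorm caused by summer heat occurs, "
                "it may cause flooding and consequential clean up",
    is_opportunity=False, probability=3, impact=4,
    response="Improve site drainage; clean-up crew on standby")
print(storm.short_name, storm.priority())   # Storms 12
```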

Managing issues is similar; the key difference is that the consequences of an unresolved issue are certain – the issue is a fact that has to be dealt with (see more on issues management).

There are a number of factors that can cause both risks and issues to be improperly defined, some technical, most cultural. Three of the most important are:

  • Dealing with easy to identify symptoms without looking for the root cause of the risk / issue (see more on root cause analysis).
  • A management culture that does not allow open and honest reporting of risks and issues; preferring to hide behind amorphous descriptions such as ‘technical complexity’ rather than the real risk ‘management’s inability to manage this level of complicated technology’.
  • Failing to allow adequate time to analyse the stakeholder community using tools such as the Stakeholder Circle, so that the full extent of risks associated with people’s capabilities and attitudes can be understood – these can account for up to 90% of the actual risks in most projects.

Management culture is the key to both allowing and expecting rigorous and honest assessment of risk. One of the key functions of every organisation’s governing body is to design, create and maintain the organisation’s management culture; this is a problem that starts at the top! For more on the roles of governance see: http://www.mosaicprojects.com.au/WhitePapers/WP1096_Six_Functions_Governance.pdf.

Stakeholders and Reputational Risk

Your reputation and your organisation’s reputation are valuable assets. The willingness of others to trust you, their desire to work with you and virtually every other aspect of the relationship between you and your stakeholders is influenced by their perception of your reputation (see more on The value of trust).  But reputations are fragile: they can take a lifetime to build and seconds to lose. Some of the factors influencing them are:

  1. Reputation cannot be controlled: it exists in the minds of others so it can only be influenced, not managed directly.
  2. Reputation is earned: trust is based on consistent behaviour and performance.
  3. Reputation is not consistent: it depends on each stakeholder’s view. One organisation can have many different reputations, varying with each stakeholder.
  4. Reputation will vary: each stakeholder brings a different expectation of behaviour or performance and so will have a distinct perception of reputation.
  5. Reputation is relational: you have a reputation with someone for something. The key question is therefore: ‘with whom, for what?’
  6. Reputation is comparative: it is valued in comparison to what a particular stakeholder experiences or believes in relation to peers, performance and prejudice.
  7. Reputation is valuable: but the true value of reputation can only be appreciated once it is lost or damaged.

Estimating the ‘true value’ of your reputation is difficult, and as a consequence decisions on how much to invest in enhancing and protecting it become a value judgment rather than a calculation. Your reputation is created and threatened by both your actions and their consequences (intended or not).  Some actions and their effects on your reputation are predictable, others are less so, and their consequences, good or bad, are even less certain. This is true regardless of your intention; unexpected outcomes can easily cause unintended benefit or damage to your reputation.

Building a reputation requires hard work and consistency; the challenge is protecting your hard earned reputation against risks that can cause damage; and you never know for sure what will cause reputational damage until it is too late – many reputational risks are emergent.

Managing Reputational Risk in Organisations

Because an organisation’s reputation is not easy to value or protect, managing reputational risk is difficult! This is particularly true for larger organisations where thousands of different interactions between staff and stakeholders are occurring daily.

The first step in managing an organisation’s reputational risk is to understand the scope of possible damage, as well as the potential sources and the degree of possible disruption. The consequence of a loss of reputation is always the withdrawal of stakeholder support:

  • In the private sector this is usually investor flight and share value decline; these can spiral out of control if confidence cannot be restored.
  • In the public sector this is typically withdrawal of government support to reflect declining confidence.
  • In the professional sector client confidence is vital for business sustainability; a loss of reputation means a loss of clients.

Each sector can point to scenarios where the impact of reputation damage can vary from mild to catastrophic; and whilst the consequences can be measured after the event, they are not always predictable in advance.  To overcome this problem, managing reputation risk for an organisation requires three steps:

  • Predict: All risk is future uncertainty, and an appropriate risk forecasting system to identify reputation risk is required – creative thinking is needed here! The outcomes from a reputational risk workshop will be specific to the organisation and the information must feed directly into the governance process if reputation risk is to be taken seriously (see more on The Functions of Governance).
  • Prepare: Reputation risk is a collective responsibility, not just the governing body’s. All management and operational staff must recognise the organisation’s reputation is important and take responsibility for protecting it in their interaction with stakeholders. The protection of reputation should also be a key element in the organisation’s disaster recovery plans.
  • Protect: A regular vulnerability review will reveal where reputation risk is greatest, and guide actions to prevent possible damage. Each vulnerability must be assessed objectively and actions taken to minimise exposure. Significant risks will need a ‘protection plan’ developed and then implemented and monitored.

Dealing with a Reputational Risk Event

When a risk event occurs, some standard elements need to be part of the response for individuals and organisations alike. For reputation-enhancing risk events, make sure you acknowledge the ‘good luck’ appropriately and take advantage of the opportunity in a suitably authentic way. Over-hyping an event will be seen as inauthentic and have a negative effect on reputation; but good news and good outcomes should be celebrated. Reputation-threatening risk events need a more proactive approach:

  • Step 1: Deal with the event itself. You will not protect your reputation by trying to hide the bad news or ignoring the issue.  Proactively working to solve the problem in a way that genuinely minimises harm for as many stakeholders as possible also minimises the damage that has to be managed.
  • Step 2: Communicate. And keep communicating – organisations need to have a sufficiently senior person available quickly as the contact point, and keep the ‘news’ coming. Rumours and creative reporting will always be worse than the facts and will grow to fill the void. All communication needs to be open, honest and as complete as possible at the time.  Where you ‘don’t know’, tell people what you are doing to find out (see Integrity is the key to delivering bad news successfully).
  • Step 3: Keep your promises and commitments. If this becomes impossible because of changing circumstances, tell people as soon as you know; don’t wait for them to find out.
  • Step 4: Follow up afterwards. Actions that show you really care after the event can go a long way towards repairing the damage to your reputation.

Summary

Reputation is ephemeral, and a good reputation is difficult to create and maintain. Warren Buffett, in his 2015 memo to his top management team at Berkshire Hathaway, emphasised that their top priority must be to ‘zealously guard Berkshire’s reputation’. He also reminded his leadership team that ‘we can afford to lose money – even a lot of money. But we can’t afford to lose reputation – even a shred of reputation’ (discussed in Ethics, Culture, Rules and Governance). In the long run I would suggest this is true for every organisation and individual – your reputation is always in the minds of other people!

Project Risk Management – how reliable is old data?

One of the key underpinnings of risk management is reliable data on which to base probabilistic estimates of what may happen in the future.  The importance of understanding the reliability of the data being used is emphasised in PMBOK® Guide 11.3.2.3 Risk Data Quality Assessment and virtually every other risk standard.

One of the tenets underpinning risk management in all of its forms, from gambling to insurance, is the assumption that reliable data about the past is a good indicator of what will happen in the future – there’s no certainty in this process, but there is a degree of probability that future outcomes will be similar to past outcomes if the circumstances are similar. ‘Punters’ know this from their ‘form guides’, insurance companies rely on it to calculate premiums, and almost every prediction of some future outcome relies on an analogous interpretation of similar past events. Project estimating and risk management are no different.

Every time or cost estimate is based on an understanding of past events of a similar nature; in fact the element that differentiates an estimate from a guess is having a basis for the estimate! See:
–  Duration Estimating
–  Cost Estimating

The skill in estimating both normal activities and risk events is understanding the available data, and being able to adapt the historical information to the current circumstances. This adaptation requires understanding the differences in the work between the old and the current and the reliability and the stability of the information being used. Range estimates (three point estimates) can be used to frame this information and allow a probabilistic assessment of the event; alternatively a simple ‘allowance’ can be made. For example, in my home state we ‘know’ three weeks a year is lost to inclement weather if the work is exposed to the elements.  Similarly office based projects in the city ‘know’ they can largely ignore the risk of power outages – they are extremely rare occurrences. But how reliable is this ‘knowledge’ gained over decades and based on weather records dating back 180 years?
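As a sketch of how such a range estimate can be used probabilistically (the one/three/six week figures are illustrative assumptions, not the historical record):

```python
import random

# Three-point estimate for weeks lost to inclement weather per year,
# sampled with a triangular distribution.
optimistic, most_likely, pessimistic = 1.0, 3.0, 6.0

samples = sorted(random.triangular(optimistic, pessimistic, most_likely)
                 for _ in range(10_000))
print(f"P50: {samples[5_000]:.1f} weeks, P80: {samples[8_000]:.1f} weeks")
```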

Last year was the hottest year on record (by a significant margin), as was 2014 – increasing global temperatures increase the number of extreme weather events of all types, and exceptionally hot days place major strains on electrical distribution grids, increasing the likelihood of blackouts.  What we don’t know, because there is no reliable data, is the consequences.  The risks of people not being able to get to work, of blackouts, and of inclement weather events are now different – but we don’t know how different.

Dealing with this uncertainty requires a different approach to risk management and a careful assessment of your stakeholders. Ideally some additional contingencies will be added to projects and additional mitigation action taken such as backing up during the day as well as at night – electrical storms tend to be a late afternoon / evening event. But these cost time and money…..

Getting stakeholder buy-in is more difficult:

  • A small but significant number of people (including some in senior roles) flatly refuse to accept there is a problem. Despite the science, they believe, based on ‘personal observations’, that the climate is not changing…
  • A much larger number will not sanction any action that costs money without a cast-iron assessment based on valid data. But there is no valid data; the consequences can be predicted based on modelling, but there are no ‘facts’ based on historical events…
  • Most of the rest will agree some action is needed but require an expert assessment of the likely effect and the value proposition for creating contingencies and implementing mitigation activities.

If it ain’t broke, don’t fix it???? 

The challenge facing everyone in management is deciding what to do:

  • Do nothing and respond heroically if needed?
  • Think through the risks and potential responses to be prepared (but wait to see what actually occurs)??
  • Take proactive action and incur the costs, but never being sure if they are needed???

There is no ‘right answer’ to this conundrum; we certainly cannot provide a recommendation because we ‘don’t know’ either.  But at least we know we don’t know!

I would suggest discussing what you don’t know about the consequences of climate change on your organisation is a serious conversation that needs to be started within your team and your wider stakeholder community.

Doing nothing may feel like a good option – wait and see (ie, procrastination) can be very attractive to a whole range of innate biases. But can you afford to do nothing?  Hoping for the best is not a viable strategy, even if inertia in your stakeholder community is intense. This challenge is a real opportunity to display leadership, communication, and negotiation skills to facilitate a useful conversation.

Extreme Risk Taking is Genetic……

A 2014 scientific study, Going to Extremes – The Darwin Awards: sex differences in idiotic behaviour, highlights the need for gender diversity.  The class of risk studied in this report is idiotic risk, defined as a senseless risk where the apparent payoff is negligible or non-existent, and the outcome is often extremely negative and often final. The results suggest that having an ‘all male’ or male-dominated decision-making group may be a source of risk in itself.

Sex differences in risk-seeking behaviour, emergency hospital admissions, and mortality are well documented and confirm that males are more at risk than females. Whilst some of these differences may be attributable to cultural and socioeconomic factors (eg, males may be more likely to engage in contact and high-risk sports, and are more likely to be employed in higher-risk occupations), sex differences in risk-seeking behaviour have been reported from an early age, raising questions about the extent to which these behaviours can be attributed purely to social and cultural differences. This study extends that research to look at ‘male idiot theory’ (MIT), based on the archives of the ‘Darwin Awards’. Its hypothesis, derived from Women are from Venus, men are idiots (Andrews McMeel, 2011), is that many of the differences in risk-seeking behaviour may be explained by the observation that men are idiots and idiots do stupid things… but little is known about sex differences in idiotic risk-taking behaviour.

The Darwin Awards are named in honour of Charles Darwin, and commemorate those who have improved the human gene pool by removing themselves from it in an idiotic way.  Whilst the award is usually made posthumously (the winner normally has to kill themselves), the 2014 ‘The Thing Ring’ award shows there are other options.  Based on this invaluable record of idiotic human behaviour, the study considered the gender of the award recipients over a 20-year period (1995-2014) and found a marked sex difference in Darwin Award winners: males are significantly more likely to receive the award than females.

Of the 413 Darwin Award nominations in the study period, 332 were independently verified and confirmed by the Darwin Awards Committee. Of these, 14 were shared by male and female nominees (usually overly adventurous couples in compromising positions – see: La Petite Mort), leaving 318 valid cases for statistical testing. Of these 318 cases, 282 Darwin Awards were awarded to males and just 36 to females – meaning 88.7% of the idiots accepted as Darwin Award winners were male!
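The split is so lopsided that formal testing is almost a formality; here is a quick sketch of a one-sided binomial test against 50:50 odds (pure standard library):

```python
from math import comb

n, k = 318, 282   # valid cases, male winners (from the study)
male_share = k / n
# Probability of seeing 282 or more male winners if the true odds were 50:50.
p_one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"male share: {male_share:.1%}")          # 88.7%
print(f"one-sided p-value: {p_one_sided:.3g}")  # vanishingly small
```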

Gender diversity on decision-making bodies may help to reduce this potential risk factor in two ways.  First, by reducing the percentage of people potentially susceptible to MIT. Second, by modifying the social and cultural environment within the decision-making body, reducing the body’s tendency to take ‘extreme risk decisions’.

One well documented example is the current Federal Government. Given the extremely limited representation of women in the make-up of the current Abbott government, and some of the self-destructive decisions they have made, I’m wondering if there is a correlation. A softer, less aggressive, lower risk approach to implementing many of the policies they have failed to enact may have resulted in a very different outcome for the government.

Predicting Completion

At one level, completing on schedule has been a requirement, enforced to a greater or lesser extent, for millennia. In the 1905 English court case Clydebank Engineering and Shipbuilding Co Ltd v Don Jose Ramos Yzquierdo y Castaneda [1905] AC 6, the court was prepared to uphold a ‘liquidated damages’ clause for delivery, at the rate of £500 per week for each vessel not delivered by the contractors in the contract time. And rather more severe penalties could be imposed by Roman Emperors for late delivery.

As governments do today, the Romans outsourced most of their major works to contractors, with both public accountability and a legal framework as key governance constraints. What was significantly different was the consequences of failure! If a project went badly wrong in Roman times, the responsible public official would suffer a major career-limiting event that could affect the prospects of his descendants for generations to come, whilst the retribution applied to the contractor could be even more serious, including death, as well as consequences for generations to come.  Applying the Roman approach could give a whole new meaning to the ‘pain share’ part of a modern Alliance contract… as well as removing, by execution, many of the worst-performing contractors. Rome was not built in a day, but the Roman empire did last for close to 1000 years (Frontinus – A Project Manager from the Roman Empire Era, by Walker & Dart, Project Management Journal, Vol. 42, No. 5, 4-16).

However, whilst there was undoubtedly an imperative for timely completion of contracts (projects in today’s terminology), there seems to be little in the way of predictive processes used by managers to assess the current expected completion date prior to the 1950s.

Having said that, I’m sure that ‘smart people’ would have been assessing the expected completion of any significant ‘body of work’, both during the planning processes and during the course of the work. You simply cannot run a factory profitably if you cannot tell a customer when to expect their order – but predictive assessments and predictive processes are quite different.

Cost management and accounting have documented roots more than 6000 years old (provided you can read clay tablets), with modern bookkeeping emerging in the 15th century. I have found plenty of evidence of proficient governance and effective cost control on projects in the 17th, 18th and 19th centuries, but so far nothing ‘predictive’ (cost or time) until the 20th century. Prior to the 20th century, ‘cost control’ focused on comparing actual costs against the planned cost (a process still common in many organisations).

Similarly, the idea of probability and making calculations about future outcomes from a risk management perspective can be traced back to the 17th century and the work of Newton, Leibniz, Bernoulli and Pascal.  These mathematicians advanced probability to the point where life insurance and annuities could be bought and sold, but again there seems to be little cross over into the realm of predicting project outcomes until the 20th century.

From a time management perspective, William Playfair ‘invented’ graphical statistics (including bar charts) and published a series of different charts in his Commercial and Political Atlas of 1786.

However, whilst Playfair’s charts are detailed and accurate, they only report history; trends and forecasts were not considered (or at least not published).

There is a continuum from these early charts through to the work of Henry Gantt (who is falsely credited with developing ‘project management’ and ‘bar charts’) some 200 years later (for more on this see: The Origins of Bar Charting).

The most sophisticated of Gantt’s charts, described in The Gantt Chart: A Working Tool of Management (Wallace Clark, 1923), shows slippage or acceleration against the overall target production for one batch of parts on one machine, but again this work does not extend to predicting the completion date for the work, or for a related set of activities.

From a measurement perspective, the concept of ‘piece rates’ can be traced back to the 16th century (the phrase ‘piece work’ first appears in writing around the year 1549). Piece work requires measurement of performance to calculate a worker’s pay, and record keeping. However, whilst there is ample evidence of people being measured and paid this way for more than 400 years, there is little indication of this information being used to predict outcomes.

Measuring performance was integral to Taylor’s scientific management and the work of Henry Gantt; Chapter 3 of Gantt’s Work Wages & Profits focuses on incentives and bonus payments for production work in machine shops. Foremen and workers are to be paid a bonus if they achieve the target production time for a ‘piece’ of work. The bonuses are calculated after the event, and nothing in Work Wages & Profits refers to any form of predictive scheduling beyond the usual planning needed for machine loadings. Gantt’s work is the most advanced of any of the options discussed to date, but all of his charts are focused on highlighting problems so that management action could be directed to resolving them.

In short, nothing in the documented practice of accounting, ‘bar charting’, piece rates, or Gantt’s motivational bonuses was designed to predict the completion date of the work or its final cost based on performance to date. All of these processes, including Gantt’s, were focused on solid planning and then working to achieve the plan by eliminating problems that caused slippage, to make sure the work was accomplished in accordance with the plan (reactive management).

Whilst it would have taken very little effort to take the actual, planned, or estimated production rate (minutes per unit) and apply it to the remainder of the lot (scope of work) to predict when the production lot was going to be finished, no one seems to have taken this step. The start of predictive calculations does not seem to have emerged until operational research people started playing around with the concepts during WW2 (1940s).
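The missing step really is trivial arithmetic, as this sketch shows (the production figures are illustrative assumptions):

```python
# Predicting lot completion from the production rate achieved to date.
lot_size = 500        # pieces in the lot
completed = 120       # pieces finished so far
hours_worked = 60.0   # hours spent so far

rate = completed / hours_worked                  # pieces per hour to date
hours_remaining = (lot_size - completed) / rate
print(f"forecast: {hours_remaining:.0f} more hours to finish the lot")
```

At 2 pieces per hour, the remaining 380 pieces forecast 190 more hours of work.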

Predictive, time-focused planning emerged in the late 1940s or early 1950s with the development of linear programming in association with OR, which in turn morphed into CPM, PERT, MPM, and a number of similar techniques at around the same time in the UK, Europe and the USA. Prior to the 1950s the focus was on ‘how far behind’ any element of the work was; the advent of network-based scheduling provided a process for predicting completion. Kelley and Walker’s 1959 paper is very clear on this (see the Annex in A Brief History of Scheduling). From these time management foundations, PERT Cost emerged, then C/SCSC, which in turn grew into EVM and more recently Earned Schedule (see: Predicting Completion).

Today, project controls are expected to predict cost and time outcomes for each project and in both business and government the expectation of forward predictions of profits, incomes and expenditures are normal.

The question posed by this blog is: given that astronomers were predicting celestial events from 1000 BC or earlier, and that some of the charts and risk assessments we use today were available from at least the 17th century, were these concepts used by managers to predict work outcomes?  Or did all of this emerge in the last 60 years??

More precisely, is there documented evidence of someone using current performance to update a plan and predict cost, time or other outcomes before the 1950s?

The evidence I have found to date that suggests predictions are very much a development of the last 60 years is freely available at: http://www.mosaicprojects.com.au/PM-History.html. I would be delighted to be proved wrong!