

The evolution of AI

In our previous blog, AI is coming to a project near you!, we identified a large number of project management software applications using Artificial Intelligence (AI) and noted how rapidly the capability is spreading. But what exactly is AI? This post offers a brief overview of the concept.

AI is not as new as some people imagine. Some of the mathematics underpinning AI can be traced back to the 18th century, and many of the fundamental concepts were developed in the 20th, but practical use of AI remained very limited. Making widespread practical use of AI required the development of computers with sufficient capability to process large amounts of data quickly. Each of the developments outlined below was enabled by better processors and increased data storage capabilities.

Types of AI

The modern concept of intelligent processing is more than 50 years old, but the way a computer application works depends on the design of the application.  Very broadly:

Decision tables have been used in software since the 1960s. A decision table applies a cascading set of decisions to a limited set of data to arrive at a result. The ‘table’ is hard-wired into the code and does not change. Many resource levelling algorithms use decision tables to decide which resources are allocated to which activities on each day.
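To make the idea concrete, the sketch below shows what a hard-wired decision table for resource levelling might look like. This is an illustration only, not taken from any real scheduling product; the rules, field names (total_float, duration, act_id), and data are all invented for the example:

```python
# Illustrative sketch of a hard-wired decision table for resource levelling.
# The rules and field names (total_float, duration, act_id) are assumptions
# for this example, not taken from any real scheduling tool.

def allocate(activities, available):
    """Allocate 'available' resource units, one per activity, using a
    fixed cascade of tie-break rules that never changes at run time."""
    # The 'table': each row is applied in order to rank competing activities.
    rules = [
        lambda a: a["total_float"],   # 1. least float (most critical) first
        lambda a: -a["duration"],     # 2. then longest remaining duration
        lambda a: a["act_id"],        # 3. then lowest activity ID
    ]
    ranked = sorted(activities, key=lambda a: tuple(r(a) for r in rules))
    return [a["act_id"] for a in ranked[:available]]

acts = [
    {"act_id": "A", "total_float": 5, "duration": 10},
    {"act_id": "B", "total_float": 0, "duration": 4},
    {"act_id": "C", "total_float": 0, "duration": 9},
]
print(allocate(acts, 2))  # → ['C', 'B']
```

B and C outrank A because they have no float; C outranks B because it has the longer duration. The key point is that the cascade of rules is fixed in the code: the application never learns or changes them.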

Expert systems, known as rule-based systems in the 1960s, use explicitly crafted rules to generate responses or data sets based on an initial question. These systems were the basis of many automated Chatbots and help systems. The system’s rules are ‘hard-wired’ and do not change without external intervention.

Data mining was developed in the 1990s. The application uses generalized instructions to examine large volumes of data and discover previously unknown properties. Using generalized processes means the data being examined does not need to be labelled or predefined; the application works out a suitable structure and then draws classes of information from within the data set. These systems can be interactive, but are not self-learning. Extracting knowledge from ‘big data’ supports Business Intelligence and other business and marketing improvement initiatives.

Machine learning (ML). Basic ML is similar to data mining. It is concerned with the development and study of statistical algorithms that can effectively generalize to perform tasks without explicit instructions. ML focuses on prediction and recognition, based on properties the application has learned from training data. The basic functions of ML were defined in the early 2000s and the concept continues to evolve and develop. The basic ML approach can be seen in a range of project management tools where the application recommends durations, lists likely risks, or performs other predictive assessments on your current project, based on data from previous projects.
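The core ML idea of learning from past projects to predict the current one can be sketched very simply. The example below fits a one-variable least-squares line to invented historical data (package size in drawings versus duration in days) and then recommends a duration for a new package; real tools use far richer models, but the learn-then-predict pattern is the same:

```python
# Minimal sketch of the ML idea: learn from past projects, predict for the
# current one. A one-variable least-squares fit; all data is invented.

def fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# 'Training data' from previous projects: (scope in drawings, duration in days)
scope = [10, 20, 30, 40]
days  = [12, 22, 31, 43]
slope, intercept = fit(scope, days)

# Recommend a duration for a 25-drawing package on the current project
print(round(slope * 25 + intercept, 1))  # → 27.0
```

Nothing here is hard-wired in advance: change the training data and the recommendation changes, which is the essential difference from a decision table or rule-based system.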

Algorithmic Decision Making is a subset of ‘expert systems’ focused on using conventional machine learning and statistical techniques, such as ordinary least squares, logistic regression, and decision trees, to automate traditional human decision-making processes. Research suggests well-designed algorithms are less biased and more accurate than the humans they replace, improving predictions and decisions, but care is needed; they can also perpetuate blind spots and biases, and can be built to be fundamentally unfair.
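A decision tree of the kind mentioned above can be as simple as a few nested thresholds. The hypothetical sketch below automates a bid/no-bid decision; the thresholds, field names, and the idea of referring borderline cases to a human are all assumptions made up for this example:

```python
# Hedged sketch only: a hand-built decision tree automating a human
# bid/no-bid decision. Thresholds and field names are invented.

def bid_decision(opportunity):
    if opportunity["margin_pct"] < 5:
        return "no-bid"                 # too little profit
    if opportunity["client_payment_score"] < 0.6:
        return "no-bid"                 # poor payment history
    if opportunity["capability_match"] >= 0.8:
        return "bid"                    # strong fit, good margin
    return "review"                     # borderline: refer to a human

print(bid_decision({"margin_pct": 8,
                    "client_payment_score": 0.9,
                    "capability_match": 0.85}))  # → bid
```

The bias risk is easy to see in this form: whatever blind spots are encoded in the thresholds (or, in a learned tree, in the training data) are applied consistently to every decision.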

Generative AI (Gen AI) extends the capability of ML. Gen AI uses generative artificial neural networks and ‘deep learning’ to deliver enhanced performance. Gen AI has been applied to large language models (LLM), computer vision, robotics, speech recognition, email filtering, agriculture, medicine, and many other fields. Each branch of development takes the basic principles of Gen AI and adapts them to the specific needs of the researchers. Latest trends are linking different strands of Gen AI to create new things such as generating pictures from verbal descriptions, and linking Gen AI to the IoT (Internet of Things) and additive manufacturing functions to produce computer designed ‘stuff’.

Large language models (LLM) are the branch of generative AI with the most direct relevance to project management. LLMs use deep learning algorithms that can perform a variety of natural language processing (NLP) tasks. They are trained on massive datasets, which enables them to recognize, translate, predict, or generate text and other content. LLM applications must be pre-trained and then fine-tuned so that they can classify text, answer questions, summarize documents, and generate text, sound, or images. The challenge with LLMs is in the training materials: the system only knows what it has been taught. This branch of Gen AI burst into prominence with the development of ChatGPT. Its developer, OpenAI (a research company), launched ChatGPT on November 30, 2022 – a year later, ChatGPT has world-wide attention.

LLMs underpin most of today’s advanced AI applications and can generate content across multiple types of media, including text, graphics, and video. While early implementations have had issues with accuracy and bias, the tools are improving rapidly.

Progress to date indicates that the inherent capabilities of Gen AI will fundamentally change enterprise technology, and how businesses operate. Rather than simply requiring technical skills, employees will also need critical thinking, people skills, and a good understanding of ethics and governance processes.

In the four months from January to April 2023 the percentage of employees in Australia using Gen AI grew from 10% to 35%[1].  This rapid growth in use raises concerns around safety, privacy and security but businesses that do not explore the use of Gen AI in their organisations or industry risk being left behind.

The technological world is becoming very closely integrated:

Source: ACS Australia’s Digital Pulse | A new approach to building Australia’s technology skills.

For more on the application of AI to project management software see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI

Our next post will look at the use of LLM in project management.


[1] Australian Computer Society research 2023. Australia’s Digital Pulse | A new approach to building Australia’s technology skills

Baked In Optimism – Why so many projects fail

This webinar, presented as part of the free PGCS 2023 Webinar Series, looked at two processes that are ‘baked into’ standard project management estimating and control to show how recommended good practices are still optimistically biased.

  • When preparing an estimate, good practice recommends using Monte Carlo analysis to determine an appropriate contingency and the level of risk to accept. However, the typical range distributions used are biased – they ignore the ‘long tail’.
  • When reporting progress, the estimating bias should be identified and rectified to offer a realistic projection of the project outcome. Standard cost and schedule processes typically fail to deal adequately with this challenge, meaning the final time and cost overruns are not predicted until late in the project.
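The ‘long tail’ bias is easy to demonstrate. The sketch below compares the P95 contingency implied by a bounded triangular estimate with a right-skewed (lognormal) model of the same task. The specific distributions and numbers are invented for illustration; they are not drawn from any real project data:

```python
# Illustration only: a bounded triangular estimate versus a long-tailed
# lognormal model of the same task. All numbers are invented.
import math
import random

random.seed(42)
N = 100_000

def p95(samples):
    """95th percentile of a list of samples."""
    s = sorted(samples)
    return s[int(0.95 * len(s))]

# Triangular: min 80, most-likely 100, max 150 -- the tail is cut off at 150
tri = [random.triangular(80, 150, 100) for _ in range(N)]

# Lognormal with its mode also near 100, but an unbounded right tail
sigma = 0.3
mu = math.log(100) + sigma ** 2      # mode = exp(mu - sigma^2) = 100
logn = [random.lognormvariate(mu, sigma) for _ in range(N)]

print(f"Triangular P95: {p95(tri):.0f}")   # about 137
print(f"Lognormal  P95: {p95(logn):.0f}")  # about 179
```

Both models have their most likely cost near 100, but the bounded triangular distribution understates the P95 contingency by a large margin because it simply cannot produce the rare, very expensive outcomes a long-tailed distribution allows.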

This webinar highlighted at least some of the causes for these problems. Solving the cultural and management issues is for another time. Download the PDF of the slides, or view the webinar at: https://mosaicprojects.com.au/PMKI-PBK-046.php#Process2

Detail can be the enemy of useful

It seems nothing is fixed if you look closely enough. A few months ago we posted The Planning Paradox – How much detail is too much?, which looked at the ‘coastline paradox’; in summary, the smaller your measurement unit, the longer a coastline becomes. This post is a corollary – the fact that a reference point moves does not invalidate the reference.

Every height in Great Britain is stated as a height above (or below) the Ordnance Datum Newlyn (ODN) – Ireland uses a different datum. The ODN is defined as the mean sea level recorded by the tidal gauge at Newlyn in Cornwall between 1915 and 1921, and is marked by a brass bolt fixed to the harbour wall in Newlyn.

(a) The brass bolt benchmark (OS BM 4676 2855), located in the Tidal Observatory, from which the ODN national datum is defined as being 4.751 m below the mark, and
(b) the cover of the historic mark.

While ODN was a measurement of mean sea level in 1915-1921, it is important to recognize that mean sea level has risen since then, so it is best to think of the ODN as an arbitrary height reference point that has been used for the past 100 years rather than a reflection of the actual current mean sea level.

Given the Cornish peninsula is made of solid granite, as is the harbour wall, Britain has a fixed reference point for all levels, so everything should be good… or is it?

The problems start when you measure heights from space using accurate GPS. These measurements show that the whole Cornish Peninsula rises and falls by several centimeters twice every day in response to the tidal loading caused by the very high tidal range in the region moving gigatons of water in and out of the English and Bristol Channels. While moving up and down by as much as 13 cm sounds dramatic, and is measurable, the deflection is very small compared to the scale of the earth. In contrast, an error of 13 cm in an engineering survey would be very significant.

The effect of tidal loading is not restricted to Cornwall. An academic paper by P. J. Clarke and N. T. Penna determined that ocean tide loading (OTL) affects all parts of the British Isles to varying degrees, causing peak-to-peak vertical displacements of up to 13 cm in South-West England (Cornwall), reducing to a few millimeters further inland. OTL also causes lateral displacements as the earth’s crust flexes; these are typically around one-third of the magnitude of the vertical displacements.

All of this flexing means there will be measurable differences between surveys done using the Global Navigation Satellite System (GNSS) and those based on the Ordnance Survey datums across the UK, and the difference will change continually throughout the day. The differences are calculable, but which reference system is correct? Most modern survey equipment uses GNSS, but the location of everything shown on maps and plans is based on measurements derived from the Ordnance Survey datums.

Combine this with the ‘continental drift’ discussed in Knowing (exactly) where you are is not that simple! and the challenge of creating an accurate survey becomes apparent.

The first question is which reference point really matters for what you are doing. Most terrestrial surveys are positioning things on land: property boundaries, foundation levels, etc. Given all of the ‘land’ in a location will be moving more-or-less as a single unit, measurements from the datums fixed to the land are usually going to be the most useful. It is only when you are operating at a global level that the GNSS data becomes more useful. Surveyors the world over use equipment based on GNSS (it makes their job much easier), but typically reference their equipment’s datum back to a local survey mark – they calculate the relative differences based on this fixed reference point, and the fact everything is moving becomes irrelevant.

So, what has all this got to do with project controls?  I would suggest two things:

  1. First, excessive detail is the enemy of useful information.  When you are using a map on a hike, you don’t care if the reference points are moving a few centimeters, you just want to know how many miles to the next pub!
     
  2. Second, you need a valid reference point and metric. If you are looking at measuring ‘velocity’ in a project using Scrum, or productive effort in an engineering facility, using hours of effort is likely to be more meaningful than the overall cost, which can be distorted by the variable pay rates earned by different people.  But if you are measuring the overall viability or profitability of a project, then the overall costs do matter.

For more on project controls see: https://mosaicprojects.com.au/PMKI-SCH.php

This post is part of a series looking at The Origins of Numbers, Calendars and Calculations used in project management:  https://mosaicprojects.com.au/PMKI-ZSY-010.php#Overview

Assessing Delay – the SCL Options.

Our latest paper, Assessing Delay – the SCL Options, has been published in the April edition of PM World Journal.

This paper reviews the Society of Construction Law’s Delay and Disruption Protocol (2nd edition), and contrasts the SCL Protocol with AACE® International Recommended Practice No. 29R-03 Forensic Schedule Analysis (2011 Ed.).

In most respects, the two documents take a very similar approach to assessing delay and disruption on construction projects. The fundamental difference is in the focus of the documents: the objective of the SCL Protocol is to provide useful guidance on some of the common delay and disruption issues that arise on construction projects, with a view to minimizing disputes, whereas AACEi 29R-03 focuses on forensically analyzing delays after a dispute has arisen.

To download more papers focused on delay, disruption and acceleration see: https://mosaicprojects.com.au/PMKI-ITC-020.php#ADD

Assessing Delay & Disruption – Papers updated

In preparation for the publication of a new paper Assessing Delay – the SCL Options in the April edition of PM World Journal it has been necessary to review and update several of the existing Mosaic papers focused on forensic analysis. These updated papers are available for immediate download.

The major updates are to:

Assessing Delay and Disruption – Tribunals Beware. This paper is based on the AACE® International Recommended Practice No. 29R-03 Forensic Schedule Analysis. It:
– Describes the origins, strengths and weaknesses of ‘Critical Path’ scheduling.
– Outlines the current ‘state of play’ with regards to the practice of scheduling.
– Describes the primary approaches to delay analysis, their strengths and weaknesses, including:
    – As-Built v As-Planned
    – Impacted As-Planned
    – Collapsed As-Built
    – Window Analysis and its variant, Time Impact Analysis.
– Describes the type of record needed to support the delay analysis.

Delay, Disruption and Acceleration Costs. This paper examines the theoretical underpinnings of ‘delay and disruption’ costs to suggest a realistic basis for their calculation. It is designed to help non-experts see through the ‘smoke and mirrors’ of schedule claims to understand what is likely to be real, what is feasible, and what’s hyperbole.

Independent, Serial and Concurrent Delays. This White Paper provides an overview of the differences between independent, serial, and concurrent delays and the options for assessing the effect of concurrent delays.

New blogs and articles developed as part of the research are:

Concurrent Delays – UK High Court Decision Supports SCL Protocol. This article discusses an English High Court decision supporting the approach to concurrent delays advocated in the Society of Construction Law Delay and Disruption Protocol and our White Paper (above). This judgement is likely to be influential in the UK, Australia and most Commonwealth countries, requiring expert assessment and analysis to be founded in common sense.

Delivering Expert Evidence is Becoming Harder. This article discusses a number of recent judgements that seem to have re-framed expert evidence: ‘the court is not compelled to choose only between the rival approaches and analyses of the experts. Ultimately it must be for the court to decide as a matter of fact what [occurred]’, and ‘…there is an overriding objective of ensuring that the conclusions derived from that analysis are sound from a common-sense perspective’.

Costain vs Haswell Revisited. This judgement has a number of important findings relating to schedule delay analysis including:

1.  It is necessary to prove the delay event caused a delay to completion (a challenge for a Windows approach to delay assessment).

2.  A CPM schedule is unlikely to provide a sound basis for delay assessment in agile and distributed projects.

This work and the publication in April of Assessing Delay – the SCL Options is part of a larger project to develop a controls paradigm for Assessing Delays in Agile & Distributed Projects. The internationally recognized approaches to assessing delay and disruption discussed in the papers above, are based on the premise there is a well-developed critical path schedule that defines the way the work of the project will be accomplished. Therefore, events that delay or disrupt activities in the schedule can be modelled using this schedule, their effect assessed, and responsibility for the assessed delay assigned to the appropriate party.  The focus of this paper will be to offer a practical solution to the challenge of assessing delay and disruption in agile and distributed projects where the traditional concept of a ‘critical path’ simply does not exist and the effect of intervening events has to be considered in terms of loss of resource efficiency.

 For more papers focused on Claims and Forensic Analysis see: https://mosaicprojects.com.au/PMKI-ITC-020.php

Three Project Conferences

#1 PGCS Canberra 22nd to 24th August – Registrations are now open and the call for papers is still open. For more see: https://www.pgcsymposium.org.au/

#2 Talk Around The Clock. Contribute to help raise money for earthquake victims, see: https://talk-around-the-clock.com/event-schedule

#3 PM College of Scheduling, 23rd to 26th April, see: https://pmcos.org/events/pmcos-annual-conference-las-vegas-2023/

Costain vs Haswell Revisited

One of the ways the law tries to maintain consistency across multiple court cases, in literally hundreds of court rooms, is by following the decision-making process used in previous cases to decide an outcome where similar matters are in dispute. This has the advantage of providing a degree of certainty, or at least consistency, in the way laws and contracts are interpreted, but it can make the law relatively slow to change when business practice changes. However, there are times when the judges identify problems well before the practitioners! Costain Ltd v Charles Haswell & Partners Ltd [2009] EWHC B25 (TCC) (24 September 2009) is one example. This case related to the construction of the eleven separate structures that constitute the Lostock and Rivington Water Treatment Works in Lancashire, UK.

As part of this court case, the design and construction contractor, Costain Limited, sought costs from its consulting civil engineer, Charles Haswell & Partners Ltd (Haswell), for the cost of delays caused by incorrect geotechnical advice provided by Haswell. Costain alleged that Haswell’s original design for pre-foundation ground treatment works failed to achieve the specified design criteria for two of the eleven project structures. This resulted in the need for unplanned piling works to support the two structures, which Costain alleged caused a critical delay to the project. As a consequence, Costain was seeking to recover the costs of the delay (prolongation costs) from Haswell.

The quantum experts in the case agreed on two tests for establishing Costain’s entitlement to prolongation costs:

  • First, whether the assumed delay to completion caused by the remedial piling had crystallised into the same actual delay to the completion of the project some sixteen months later, and
  • Second, whether all of the project’s activities were delayed by the piling to just two of the eleven structures.

The parties’ programming experts agreed on a common methodology for assessing the delay, which the judgment refers to as a ‘time impact analysis’ or ‘windows slice analysis’. The method described in the judgement appears the same as the Time Impact Analysis defined in the SCL Delay and Disruption Protocol and AACEi MIP 3.7 (for more detail on this see: https://mosaicprojects.com.au/PDF_Papers/P216_Assessing_Delay_The_SCL_Options.pdf).

There were some points of disagreement between the experts, but ultimately the Court found that the remedial piling on two structures was critical to the project at the time considered in the ‘windows analysis’, noting: ‘the experts have agreed that the delays to the RGF and IW were critical delays since those buildings were on the critical path of the project at the relevant time.  Ordinarily therefore one would expect, other things being equal, that the project completion date would be pushed out at the end of the job by the same or a similar period to the period of delay to those buildings.  However, as experience shows on construction sites, many supervening events can take place which will falsify such an assumed result.  For example, the Contractor may rearrange his programme so that other activities are accelerated or carried out in a different sequence thereby reducing the initial delays.’ [Clause 233]

The experts’ ‘windows’ analysis, showing that a critical delay had occurred and that an entitlement to a delay existed, was based on the premise that the work on the rest of the project would follow the logic shown in the CPM network. The Court rejected this assumption because the assumed flow-on of the delay to the overall completion of the works was not demonstrated: ‘I find that it has not been shown by Costain that the critical delay caused to the project by the late provision of piled foundations to the RGF and IW buildings necessarily pushed out the contract completion date by that period or at all’. [Clause 200 (ii)]

The second test asked whether a delay to work on part of the project would cause all of the project’s activities to be prolonged. In considering this test, the Court rejected Costain’s assumption that the remedial piling to two of the structures on the project prolonged all eleven structures: ‘If the contractor establishes [a critical, excusable delay], he is entitled to an extension of time to the whole project including, of course all those activities which were not in fact delayed … But the contractor will not recover the general site overheads of carrying out all the activities on site as a matter of course unless he can establish that the delaying event to one activity in fact impacted on all the other site activities’. [Clause 183-184]

The Court also found that ‘no evidence has been called to establish that the delaying events in question in fact caused delay to any activities on site apart from the RGF and IW buildings.  That being so, it follows, in my judgment, that the prolongation claim advanced by Costain based on recovery of the whole of the site costs of the Lostock site, fails for want of proof’. [Clause 185]

Costain failed in its claim for time related prolongation costs and only recovered the additional costs of installing the piled foundations, because ‘In the absence of any analysis between all the operative delays from the start to the finish, which is absent in this case, in my judgment it is simply not possible for the Court to be satisfied on the balance of probabilities that the assumption upon which this part of Costain’s case depends, is correct’. [Clause 235]

Conclusion – Distributed Projects are Different!

The fundamental problem outlined above was caused by the distributed nature of the project work. The Critical Path Method (CPM) assumes there is one best way to accomplish the work of the project and this is described in the schedule. In distributed projects there are multiple different ways the work could be accomplished. Therefore, any delay analysis technique based on the assumption that the sequence of work shown in a CPM schedule is the only way to accomplish the work is unlikely to prove the delay.  A different approach is needed!

We are working on this challenge.

  • Scheduling Challenges in Agile & Distributed Projects defines the problem and classifies four different types of project from a CPM and controls perspective. Using this classification, the Lostock and Rivington Water Treatment Works was a ‘Class 4’ project where a CPM schedule was imposed but was unlikely to prove effective. Distributed projects fall under Class 3, where a detailed CPM schedule is not accepted as an effective controls approach – different processes are needed.

  • Predicting Completion in Agile & Distributed Projects (due for publication in the May edition of PMWJ) will define a process for measuring progress and predicting completion in Class 3 projects.

  • Assessing Delays in Agile & Distributed Projects (due for publication in the June edition of PMWJ) will define a process for reliably determining the effect of delay or disruption in Class 3 projects.  

As work progresses, we will be updating the Schedule control in Agile and Distributed projects section of the Mosaic website and welcome feedback: https://mosaicprojects.com.au/PMKI-SCH-010.php#Issues-A+D  

The full Costain judgement can be downloaded from: https://mosaicprojects.com.au/PMKI-ITC-020.php#Cases

Measuring time

If you want to know who made clocks tick (and why they no longer do) you need our latest article, Measuring time.

This article looks at improvements in the devices used to track the time of day over the last 5000 years, and how those improvements interacted with the development of calendars and the appreciation of time, both socially and in the management of projects.

Download Measuring time from: https://mosaicprojects.com.au/Mag_Articles/AA031_Measuring_time.pdf  

For more on the Origins of Numbers, Calendars and Calculations see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Overview

Earned Schedule’s 20th Anniversary – Free ½ Day Webinar 8th March

PGCS in collaboration with the developer of Earned Schedule, Walt Lipke, and seven other international experts will be running a free webinar on 8th March to celebrate the 20th Anniversary of Earned Schedule in the EVM marketplace. The webinar will run twice to make the sessions accessible to everyone, regardless of where you are in the world.

The presenters are:
Walt Lipke:  Earned Schedule, 20 years of innovation, past – present and future  
Kym Henderson:  Validating Earned Schedule, the research and studies  
Keith Heitzman & Patrick Weaver:  Interview with Keith Heitzman (NASA Contractor) 
Robert Van de Velde:  Act Fast, Think Fast: Agile Schedule Performance
Paulo André de Andrade:  Research on a categoriser to enhance expected project duration forecasting performance using ES
Mario Vanhoucke:  A 20-year academic research journey summarized in one presentation
Michael Higgins:   Telling the time in the UK
Yancy Qualls:   Do You Trust Your IMS? (Earned Schedule vs. Traditional IECDs in a forecasting accuracy showdown) 

Registration is free, and each of the presentations will be made available to webinar participants for review after the event.

For more information, including detailed timing of the sessions, and a link to register see: https://www.pgcs.org.au/library1/2023-es-special-event/

Hard -v- Soft Projects

We are working on a couple of papers where a concise definition of hard and soft projects would be helpful.  Most commentary on the subject seems very imprecise and is based on tangible v intangible outputs.

Tangible means perceptible by touch, but a piece of artwork (say a painting) can be touched! However, if the creation of the artwork is treated as a project, in almost all other respects it is a soft project; the same goes for most design projects. The concept of a soft project is one where stakeholder engagement and change are welcome, with a focus on achieving the greatest value or stakeholder satisfaction at completion.

We suggest the primary differentiation between the two is that the various components of a hard project have to literally fit together. This requires a detailed design to be finalized for each subassembly before the necessary parts can be procured and assembled. Furthermore, the overall design has to be progressed to a stage where there is a high degree of confidence the subassemblies will fit together into components, and the components will fit together to create a final product that functions correctly and meets the specified requirements.

This means a hard project needs the detailed design of each subassembly or component to be completed before the project team can start working on the component, and each component has to be built to the design.  Change is a complex and often expensive process.

In contrast, the detailed design of components in soft projects can be, and very often is, done as part of the work involved in developing the element. While the function of the component is likely to be set in the overall design, how the functionality is delivered is flexible, and most changes can be accommodated comparatively easily. In essence, agile is designed to deliver soft projects.

There is of course the added complication that most hard projects include a significant element of software, and many soft projects include some hardware.

These factors suggest the definition of hard and soft projects should be:

A hard project is one where the majority of its subcomponents require the detailed design of the subcomponent to be finalized before work on the subcomponent commences, and the subcomponent is expected to be built to conform to its design.

A soft project is one where the majority of its subcomponents require the functionality of the subcomponent to be defined before work on the subcomponent commences, but there is significant flexibility in how the required functionality is achieved.

These definitions could be reduced to:

A hard project is one where the majority of the work is dependent on a finalised design being complete for each element of the project, prior to work starting on that element.

A soft project is one where the majority of the work has a degree of flexibility on how the required functionality is achieved.

What do you think??