
Category Archives: General Project Management

Waterfall is Dead

The PMI 2024 Pulse of the Profession has introduced a framework for categorizing projects based on the management approach being used: Predictive – Hybrid – Agile.  If generally adopted, this framework will at long last kill off the notion of waterfall as a project delivery methodology.

As shown in our historical research The History of Agile, Lean, and Allied Concepts, the idea of waterfall as a project delivery methodology was a mistake, and its value as a software development approach was limited.

The PMI framework has some problems, but the predictive project delivery paradigm is described as focused on schedule, scope, and budget. These projects tend to use a phase-based approach and are plan-driven.  This describes most hard projects and many soft projects that are not using an unconstrained agile approach.

For a detailed review of the PMI 2024 Pulse of the Profession report, and how the classification system works, see How should the different types of project management be described?, download from: https://mosaicprojects.com.au/Mag_Articles/AA026_How_should_different_types_of_PM_be_described.pdf

For more on project classification see: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class

White Constructions v PBS Holdings Revisited

White Constructions Pty Ltd v PBS Holdings Pty Ltd [2019] NSWSC 1166, involved a claim for delay and costs arising out of a contract to design a sewerage system for a subdivision and submit it for approval. The alleged breach was the failure to create and submit a sewer design acceptable to the approval authority which had the effect of delaying completion of the subdivision, giving rise to a claim for damages by White.

White and PBS both appointed experts to undertake a schedule analysis. They agreed an ‘as-built’ program of the works but disagreed on almost everything else, including the delay analysis method to use, the correct application of the methods, and the extent of the overall project delay caused by the delays in approving the sewer design.

The Judge found:

[Clause 18]      Plainly, both experts are adept at their art. But both cannot be right. It is not inevitable that one of them is right.
[Note: This approach is consistent with the UK court decision of Akenhead J in Walter Lilly & Company Ltd v Mckay [2012] EWHC 1773 (TCC) at [377], “the court is not compelled to choose only between the rival approaches and analyses of the experts. Ultimately it must be for the court to decide as a matter of fact what delayed the works and for how long”. This precedent has been followed on a number of occasions[1].]

[Clause 22]      The expert reports are complex. To the unschooled, they are impenetrable. It was apparent to me that I would need significant assistance to be put in a position to critically evaluate their opinions and conclusions.

[Clause 25]      Under UCPR r 31.54, the Court obtained the assistance of Mr Ian McIntyre (on whose appointment the parties agreed).

[Clause 137]   The major components of the works were:
       • earthworks,
       • roadworks and kerbing,
       • sewerage,
       • electrical and National Broadband Network (NBN) installation,
       • footpaths, and
       • landscaping.

[Clause 138]   The electrical and NBN installation was contracted to and carried out by an organisation called Transelect. Landscaping was contracted to RK Evans Landscaping Pty Ltd. The as-built program is not in dispute.
[Note: the rest of the work was undertaken by other contractors]

[Clause 184]   White bears the onus of establishing that it suffered loss and the quantum of it.

[Clause 185]   White’s damages are based on delay to the whole project, said to be attributable to the late (underbore) sewer design. This is not the type of subject upon which precise evidence cannot be adduced. [Therefore] It is not a subject which involves the Court having to make an estimation or engage in some degree of guesswork.

[Clause 188]   The descriptions of the methods adopted by Shahady and Senogles respectively are evidently derived from the publication of the United Kingdom Society of Construction Law, the Delay and Disruption Protocol….

[Clause 191]   Mr McIntyre’s opinion, upon which I propose to act, is that for the purpose of any particular case, the fact that a method appears in the Protocol does not give it any standing, and the fact that a method, which is otherwise logical or rational, but does not appear in the Protocol, does not deny it standing.
[Note: this is the same wording as an express statement contained in the Delay and Disruption Protocol]

[Clause 195]   Mr McIntyre’s opinion, upon which I propose to act, is that neither method [used by the parties experts] is appropriate to be adopted in this case.

[Clause 196]   Mr McIntyre’s opinion, upon which I propose to act, is that close consideration and examination of the actual evidence of what was happening on the ground will reveal if the delay in approving the sewerage design actually played a role in delaying the project and, if so, how and by how much. In effect, he advised that the Court should apply the common law common sense approach to causation referred to by the High Court in March v E & MH Stramare Pty Ltd (1991) 171 CLR 506.

[Clause 197]   The Court is concerned with common law notions of causation. The only appropriate method is to determine the matter by paying close attention to the facts, and assessing whether White has proved, on the probabilities, that delay in the underboring solution delayed the project as a whole and, if so, by how much.

[Clause 198]   This requires it to establish that:
• the whole project would have been completed by 15 July 2016,
• the final sewer approval delay delayed sewer works,
• the sewer works delay prevented non-sewer works from otherwise proceeding, that is, that the programme could not reasonably have been varied to accommodate the consequences of late approval, and
• other works could not have been done to fill downtimes so as to save time later.

[Clause 199]   ……… White has failed to discharge this burden.

Summary

The factors required to prove a delay, outlined by the Judge at Clause 198, can be generalised as follows:

  1. The completion date for the project before the delay event occurred has to be known with some certainty.
  2. The delay event has to be shown to cause a delay which flowed through to extend the overall project completion date.
  3. There were not reasonable alternative ways of working that could mitigate the effect of the delay on project completion.

More significantly, none of these steps needs a CPM schedule.  The project status and the effect of the disruption on project completion can be assessed based on its effect on the productivity of key resources. This is discussed in Assessing Delays in Agile & Distributed Projects: https://mosaicprojects.com.au/PDF_Papers/P215_Assessing_Delays_In_Agile_+_Distributed_Projects.pdf


[1]     This approach by the courts is discussed in Delivering Expert Evidence is Becoming Harder: https://mosaicprojects.com.au/Mag_Articles/AA028_Delivering_Expert_Evidence.pdf

The Artificial Intelligence Ecosystem

We have posted a number of times discussing aspects of Artificial Intelligence (AI) in project management, but what exactly is AI?  This post looks at the components in the AI ecosystem and briefly outlines what the various terms mean.

𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲: a range of computer algorithms and functions that enable computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Automatic Programming: a technology that enables computers to generate code or write programs with minimal human intervention.

Knowledge Representation: concerned with representing information about the real world in a way that a computer can understand, so it can utilize this knowledge and behave intelligently.

Expert System: a computer system emulating the decision-making ability of a human expert. A system typically includes a knowledge base, an inference engine that applies logical rules to the knowledge base to deduce new information, an explanation facility, a knowledge acquisition facility, and a user interface.

Planning and Scheduling: an automated process that achieves the realization of strategies or action sequences that are complex and must be discovered and optimized in multidimensional space, typically for execution by intelligent agents, autonomous robots, and unmanned vehicles.

Speech Recognition: the ability of devices to respond to spoken commands. Speech recognition enables hands-free control of various devices, provides input to automatic translation, and creates print-ready dictation.

Intelligent Robotics: robots that function as intelligent machines and can be programmed to take actions or make choices based on input from sensors.

Visual Perception: enables machines to derive information from, and understand, images and visual data in a way similar to humans.

Natural Language Processing (NLP): gives computers the ability to understand text and spoken words in much the same way human beings can.

Problem Solving & Search Strategies: Involves the use of algorithms to find solutions to complex problems by exploring possible paths and evaluating the outcomes. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
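
As a minimal illustration (a hypothetical sketch in Python, not drawn from any of the sources referenced here), a breadth-first search takes a problem – a start state, a goal test, and a successor function – and returns the solution as an action sequence:

from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Return a list of actions leading from start to a goal state, or None."""
    frontier = deque([(start, [])])          # (state, actions taken so far)
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions                   # the solution: an action sequence
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                              # no solution exists

# Hypothetical example: find a path through a tiny graph
graph = {'A': [('go-B', 'B'), ('go-C', 'C')], 'B': [('go-D', 'D')], 'C': [], 'D': []}
print(breadth_first_search('A', lambda s: s == 'D', lambda s: graph[s]))  # ['go-B', 'go-D']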

𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: concerned with the development and study of statistical algorithms that allow a machine to be trained so it can learn from the training data and then generalize to unseen data, performing tasks without explicit instructions. There are three basic machine learning paradigms: supervised learning, unsupervised learning, and reinforcement learning.

• Supervised learning: algorithms learn to make decisions based on past known outcomes. The data set containing past known outcomes and other related variables used in the learning process is known as training data.

• Unsupervised learning: a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabelled data and allowed to discover patterns and insights without any explicit guidance or instruction.

• Reinforcement learning (RL): an interdisciplinary area of machine learning concerned with how an intelligent agent ought to take actions in a dynamic environment to maximize the cumulative reward.

Classification: a process where AI systems are trained to categorize data into predefined classes or labels.

K-Means Clustering: an analytical technique used in data mining and machine learning to group similar objects into related clusters.
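
As an illustrative sketch (a toy data set and a bare-bones implementation, not from any cited source), the k-means loop alternates between assigning each point to its nearest cluster centre and moving each centre to the mean of its points:

import numpy as np

def k_means(points, k, iterations=100, seed=0):
    """Group points into k clusters; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iterations):
        # Assignment step: each point joins the cluster with the nearest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break   # centroids stable - the clusters have converged
        centroids = new_centroids
    return centroids, labels

# Toy example: two obvious groups of 2-D points
data = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.8, 8.2]])
centres, labels = k_means(data, k=2)
print(labels)   # e.g. [1 1 0 0] - the two groups are separated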

Principal Component Analysis (PCA): a dimensionality reduction method used to reduce the dimensionality of large data sets by transforming a large set of variables into a smaller one that still contains most of the information in the large set.

Automatic Reasoning: attempts to provide assurance about what a system or program will do or will never do based on mathematical proof.

Decision Trees: flow charts created by a computer algorithm to make decisions or numeric predictions based on information in a digital data set.

Random Forest: an algorithm that combines the output of multiple decision trees to reach a single result. It handles both classification and regression problems.

Ensemble Methods: techniques that aim to improve the accuracy of results by combining multiple models instead of using a single model.

Naive Bayes: a statistical classification technique based on Bayes Theorem. It is one of the simplest supervised learning algorithms.

Anomaly Detection: the identification of rare events, items, or observations which are suspicious because they differ significantly from standard behaviours or patterns.

𝗡𝗲𝘂𝗿𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀: machine learning (ML) models designed to mimic the function and structure of the human brain, helping computers gather insights and meaning from text, data, and documents by being trained to recognise patterns and sequences.

Large Language Model (LLM): a type of neural network, based on the transformer architecture, that can recognize and generate text, answer questions, and generate high-quality, contextually appropriate responses in natural language. LLMs are trained on huge sets of data.

Radial Basis Function Networks: a type of neural network used for function approximation problems. They are distinguished from other neural networks by their universal approximation and faster learning speed.

Recurrent Neural Networks (RNN): a type of neural network where the output from the previous step is used as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other; but to predict the next word of a sentence, for example, the previous words are required, hence the need to remember them.

Autoencoders: a type of neural network used to learn efficient codings of unlabelled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data into code, and a decoding function that recreates the input data from the encoded representation.

Hopfield Networks: a recurrent neural network with a synaptic connection pattern such that there is an underlying Lyapunov function (a measure of stability) for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov function.

Modular Neural Networks: characterized by a series of independent neural networks moderated by some intermediary to allow for more complex management processes.

Adaptive Resonance Theory (ART): a theory developed to address the stability-plasticity dilemma. The terms adaptive and resonance mean that it can adapt to new learning (adaptive) without losing previous information (resonance).

Deep Learning: a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions. The adjective deep refers to the use of multiple layers in the network.

Transformer Model: a neural network that learns context, and thus meaning, by tracking relationships in sequential data, applying an evolving set of mathematical techniques to detect subtle ways even distant data elements in a series influence and depend on each other.

Convolutional Neural Networks (CNN): a regularized type of feed-forward neural network that learns feature engineering by itself via filters or kernel optimization.

Long Short-Term Memory Networks (LSTM): a type of recurrent neural network (RNN) designed to deal with the vanishing gradient problem present in traditional RNNs.

Deep Reinforcement Learning: a subfield of machine learning that combines reinforcement learning (RL) and deep learning.

Generative Adversarial Networks (GAN): a class of machine learning frameworks for approaching generative AI. Two neural networks contest with each other in the form of a zero-sum game, where one agent’s gain is another agent’s loss. Given a training set, this technique learns to generate new data with the same statistics as the training set; a GAN trained on photographs can generate new photographs that look at least superficially authentic.

Deep Belief Networks (DBN): a type of neural network composed of several layers of shallow neural networks (restricted Boltzmann machines, RBMs) that can be trained using unsupervised learning. The output of each RBM is used as input to the next layer of the network, until the final layer is reached. The final layer of a DBN is typically a classifier trained using supervised learning. DBNs are effective in applications such as image recognition, speech recognition, and natural language processing.

For more discussion on the use of AI in project management see:
https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

A Brief History of Agile

The history of agile software development is not what most people think, and is nothing like the story pushed by most Agile Evangelists.

Our latest publication A Brief History of Agile shows that, from the beginning of large system software development, the people managing the software engineering understood the need for prototyping and iterative and incremental development. This approach has always been part of the way good software is developed.

The environment in which the authors of the early papers referenced and linked in the article were operating – satellite software and ‘cold-war’ control systems – plus the limitations of the computers they were working on, did require a focus on testing and documentation; it’s too late for a bug-fix once WW3 has started. But this is no different to modern-day control systems development where people’s lives are at stake. Otherwise, nothing much has changed: good software is built incrementally and tested progressively.

The side-track into ‘waterfall’ seems to have been started by people with a focus on requirements management and configuration management, both approached from a document-heavy, bureaucratic perspective. Add the desire of middle-management for the illusion of control and you get waterfall imposed on software developers by people who knew little about the development of large software systems. As predicted in 1970, ‘doing waterfall’ doubles the cost of software development. The fact waterfall survives in some organisations through to the present time is a function of culture and the desire for control, even if it is an illusion.

The message from history, echoed in the Agile Manifesto, is you need to tailor the documentation, discipline, and control processes, to meet the requirements of the project. Developing a simple website with easy access to fix issues is very different to developing the control systems for a satellite that is intended to work for years, millions of miles from earth.

To read the full article and access many of the referenced papers and third-party analysis see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile

Agile’s Hidden Secret!

The two fundamental questions standard agile metrics cannot answer consistently are:

1.  How far ahead or behind schedule are we currently?

2.  When are we expected to finish?

Most of the tools and techniques used to manage Agile projects are good at defining the work (done, in-progress, or not started) and can indicate if the work is ahead or behind a nominated planned rate of production, but there is no direct calculation of the time the work is currently ahead or behind the required production rate, or what this is likely to mean for the completion of the project. A full discussion of this topic is in Calculating Completion.  However, most project sponsors and clients need to know when the project they are funding will actually finish; they have other people who need to make use of the project’s outputs to achieve their objectives. At present all Agile can offer is an educated assessment based on the project team’s understanding of the work.

Work Performance Management (WPM) has been designed to solve this challenge by providing answers to these questions based on consistent, repeatable, and defensible calculations.

WPM is a simple, practical tool that uses project metrics already collected for other purposes within the project to assess progress and calculate a predicted completion date, by comparing the amount of work achieved at a point in time with the amount of work needed to have been achieved. Based on this data, WPM calculates the project status and the expected completion date, assuming the rate of progress remains constant.
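
By way of illustration, a minimal sketch of this type of calculation in Python (my own simplified example with hypothetical numbers, not Mosaic’s actual tool):

def wpm_status(planned_units, actual_units, time_now, planned_duration):
    """
    Compare work achieved with the work that should have been achieved, and
    project a completion date assuming the achieved rate stays constant.
    planned_units:    total units of work planned for the project
    actual_units:     units actually completed at time_now
    time_now:         elapsed time (e.g. weeks) from the start
    planned_duration: planned total duration in the same time units
    """
    planned_rate = planned_units / planned_duration       # required units per week
    required_by_now = planned_rate * time_now             # work that should be done
    actual_rate = actual_units / time_now                 # achieved units per week
    time_ahead_behind = (actual_units - required_by_now) / planned_rate
    expected_duration = planned_units / actual_rate       # at the current rate
    return time_ahead_behind, expected_duration

# Hypothetical example: 200 units planned over 40 weeks; 60 units done at week 15
ahead, duration = wpm_status(200, 60, 15, 40)
print(f"{ahead:+.1f} weeks relative to plan; expected duration {duration:.0f} weeks")

In this made-up example the project is 3 weeks behind plan, and at the achieved rate will run for 50 weeks rather than the planned 40.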

Our latest article, WPM for Agile Projects identifies the cause of this information gap in Agile project management, explains the inability of current tools to accurately predict completion and demonstrates how WPM will effectively close this critical information gap.
Download WPM for Agile Projects: https://mosaicprojects.com.au/Mag_Articles/AA040_-_WPM_for_Agile_Projects.pdf

For more on the practical use of WPM, free sample files, and access to the tool see: https://mosaicprojects.com.au/PMKI-SCH-041.php  

The Problem with Waterfall

The term ‘waterfall’ is seen in lots of different posts without any clear definition of what the writers of those posts mean by the term.  The only constant seems to be that, in each writer’s view, ‘waterfall’ is not Agile and generally represents bad project management practice. In summary, the agile advocates’ view seems to be:

Agile: A well-defined flexible project delivery process, based on the Agile Manifesto, applicable to software development and a wide range of other “soft projects” such as business change. Agile = Good!

Waterfall: Any project delivery process that is not Agile. Waterfall = Bad!

There are many problems with this simplistic viewpoint, starting with the fact that the concept of ‘waterfall’ had a very short life and, with the possible exception of a few very traditional software development organizations, no one uses waterfall for anything.

History of Waterfall.

To the best of my knowledge, the first publication to use the term Waterfall was the 1976 paper Software Requirements: Are They Really a Problem?, by T.E. Bell and T.A. Thayer. This paper misrepresented the 1970 paper Managing the development of large software systems, by Dr Winston Royce[1]. Royce proposed an iterative approach to the development of large systems, but Bell and Thayer falsely claimed he supported ‘waterfall’[2].

Summary diagram from Royce 1970.

The real start of Waterfall was the publication in 1988 of DOD-STD-2167A by the US Department of Defense, which established uniform requirements for the development of software based on the Waterfall approach[3].   

Extract from DOD-STD-2167A

Problems with the Waterfall approach were quickly identified, and in 1994 MIL-STD-498 was released by the US Department of Defense to correct them. Officially, Waterfall was dead and buried, but many companies had adopted it, and because waterfall projects were slow and subject to delay, hourly paid staff and contractors had a powerful incentive not to change, despite many better software development processes having been developed from the early 1980s on.

Other types of projects and project delivery.

Waterfall was a short-lived software development methodology. The vast majority of projects in the construction, engineering, oil & gas, defence, and aerospace industries use project delivery methods based on the approaches described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Sixth Edition, and a range of other standards. These other projects generally have three phases:

  1. definition phase undertaken by the client organization to define the capabilities of the product being developed
  2. procurement phase where the client selects a delivery agent for the development of the product
  3. delivery phase where the delivery agent builds and delivers the product

The design of the product (ship, building, rocket, etc.) may be undertaken in full or in part during any one of the three phases. A minimum level of design is required to initiate procurement, but for simple buildings and civil engineering projects, it is not unusual for a complete design and specification to be provided by the client.

The procurement phase may be a simple pricing exercise, or a complex, and phased, design process (sometimes even involving the production of working prototypes), with selection being based on the capabilities of the design produced by the successful tenderer.

Then, in many projects, a significant amount of detailed design is still required during the delivery phase, including shop drawings produced by subcontractors and suppliers.

Similarly, the procurement arrangements vary widely. The client may choose to enter into some form of alliance or partnership with the preferred delivery agent based on shared risk and profits, or the client may choose a hard-dollar contract based on a fixed price to deliver a fixed scope, or some other form of contractual arrangement.

The only certainties are that the typical project approaches used for the vast majority of ‘other’ projects bear no resemblance to the waterfall approach, and this ‘other’ classification includes more than two-thirds of the world’s projects by value.

Conclusions

  1. I suggest it is time to follow the US DOD lead from 1994 and bury the concept of ‘waterfall’ – using the name 30 years after it was officially dropped is long enough.
  2. People involved in the ‘Agile’ industry need to wake up to the fact that software development is only one of many types of project. Most of the ‘other’ types of project do not use Agile, and they certainly don’t use waterfall.
  3. Agile and agility are not synonymous – all organisations benefit from a degree of agility, but this has nothing to do with selecting the best project delivery methodology (more on this later).
  4. In the 21st century, Waterfall is not synonymous with over-documentation and/or bad project management. There is plenty of bad project management practice around, but bad management needs to be called out for what it is – 99.999% of the time the bad managers are not trying to use waterfall in their work.

Ditching the concept of waterfall does create a couple of challenges – we all have an understanding of what Agile means as a project delivery process, and we need similar generally accepted classifications for other types of project delivery (more on this later). Similarly, the bad management practices branded as ‘waterfall’ need to be identified and understood; you cannot improve a bad process until the root cause of the problem is understood.

For more on Agile management see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1

Note: THE MYTH OF THE ‘WATERFALL’ SDLC expands on this post in far greater detail and is highly recommended as a reference: http://www.bawiki.com/wiki/Waterfall.html


[1] Download a copy of the 1970 Royce paper: https://mosaicprojects.com.au/PDF-Gen/Royce_-_Managing_the_development_of_large_software_systems.pdf  See Fig. 10.

[2] Download a copy of the 1976 Bell & Thayer paper: https://mosaicprojects.com.au/PDF-Gen/software_requirements_are_they_really_a_problem.pdf

[3] Download DOD-STD-2167A Defense System Software Development (1988): https://mosaicprojects.com.au/PDF-Gen/DOD-STD-2167A.pdf

Benefits Management

The publication last year of BS 202002:2023, Applying benefits management on portfolios, programmes and projects, has prompted an update to our Value and Benefits Realization page, including a link to the new Standard’s home page.

As we know, organizations invest resources in projects to derive benefits and create value.  But those benefits don’t happen by themselves, they need to be managed. BS 202002:2023 is a new British standard on how to deliver the planned benefits of projects, programmes and portfolios to create value for the organization and its customers.

While the Standard is quite expensive to buy, all of the publications on the Mosaic ‘Value and Benefits’ page are free to download and use and cover:
– Value and Benefits Overview,
– Defining project success,
– Benefits Management,
– Value Management and Value Engineering, and
– Useful External Web-links & Resources.

See more at: https://mosaicprojects.com.au/PMKI-ORG-055.php  

Commercializing Agile

Agile in its various forms is becoming mainstream, and this means an increasing number of commercial contracts are being delivered by contractors who either choose, or are required, to use an agile methodology to create their contracted deliverables. While this is probably a good thing, the shift in approach can cause a number of problems. The biggest is managing the legally imposed, contractual requirement to deliver 100% of the designated project deliverables on time.  The funds available to the contractor to do this work are defined by the contract price, and failure to deliver the contracted deliverables within the contracted timeframe can lead to significant cost penalties being applied[1].

The requirement to deliver a project as promised in the agreed contract is business-as-usual for most construction and engineering projects and is common across many other industries. While relatively rare, software companies have also been successfully sued for breach of contract when their deliverables did not meet the contracted obligations; some early cases are discussed in Software sales hype and the law, and IT Business Sued for US$300 million+. In short, choosing to use Agile as a project delivery methodology will not change the laws of contract, which means organizations using the agile methodology will need to become more commercial and adapt their processes to include:

  1. Developing the realistic time and cost estimates needed to enter into a contract.
  2. Monitoring and controlling the project work to optimize the delivery of the contracted requirements within the contract timeframe.
  3. Enhancing their contract administration to deal with changes, variations, reporting, claims and other contractual requirements and issues.

This post is a start in looking for practical solutions to some of these challenges.

Contract Claim Administration

Two of the core tenets of Agile are welcoming change when it creates additional value for the client, and working with the client to discuss and resolve problems. While these are highly desirable attributes that should be welcomed in any contractual situation, what happens when the relationship breaks down, as it will on occasions?

The simple answer is that every contract is subject to law, and the ultimate solution to a dispute is a trial, after which a judge will decide the outcome based on applying the law to the evidence provided to the court. The process is impartial and focused on delivering justice, but justice is not synonymous with a fair and reasonable outcome.  To obtain a fair and reasonable outcome, evidence is needed that can prove, or disprove, each of the propositions being put before the court.

The core elements in dispute in 90% of court cases relating to contract performance are about money and time. The contractor claims the client changed, or did, something (or things) that increased the time and cost of completing the work under the contract; the client denies this and counterclaims that the contractor was late in finishing because it failed to properly manage the work of the contract.    

The traditional approach to resolving these areas of dispute was to obtain expert evidence as to the cost of each change and the time needed to implement it, including its effect on the completion date. Determining the cost of a change is not particularly affected by the methodology used to deliver the work: the additional work involved in the change, and its cost, can be determined for a change in an ‘agile’ project in a similar way to most other projects. The major issues arise in assessing a reasonable delay.

For the last 50+ years, courts have been told by many hundreds of experts that the appropriate way to assess project delay is by using a critical path (CPM) schedule. Critical path theory is based on an assumption that to deliver a project successfully there is one best sequence of activities that has to be completed in a pre-defined way to achieve the best result. Consequently, this arrangement of the work can be modelled in a logic network, and based on this model the effect of any change can be assessed.
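
To make the contrast concrete, here is a minimal sketch in Python (a hypothetical four-activity network, not from any real project or case) of the forward-pass calculation that underpins CPM: the completion date falls out of the predetermined logic, so the effect of a change can be assessed by re-running the calculation:

def completion_time(durations, predecessors):
    """Forward pass: returns the earliest possible completion of the network."""
    early_finish = {}
    def finish(act):
        if act not in early_finish:
            # An activity can start only when its last predecessor finishes
            start = max((finish(p) for p in predecessors[act]), default=0)
            early_finish[act] = start + durations[act]
        return early_finish[act]
    return max(finish(a) for a in durations)

# Hypothetical network: B and C both follow A; D follows both B and C
durations = {'A': 5, 'B': 10, 'C': 4, 'D': 3}
predecessors = {'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B', 'C']}
print(completion_time(durations, predecessors))               # 18, driven by path A-B-D
print(completion_time({**durations, 'C': 14}, predecessors))  # 22, the change moves the critical path to A-C-D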

Agile approaches the work of a project from a completely different perspective. The approach assumes there is a ‘backlog’ of work to be accomplished, and the best people to decide what to do next are the project team when they are framing the next sprint or iteration. Ideally, the team making these decisions will have the active participation of a client representative, but this is not always the case. The best sequence of working emerges; it is not predetermined, and therefore a CPM schedule cannot be developed before the work is started.

Assessing and separating the delay caused by a change requested or approved by the client from delays and inefficiencies caused by the contractor is difficult at the best of times. This process becomes more difficult in projects using an agile approach to the work, but is essential for assessing time-related claims under a contract.

There are some control tools available in Agile, but diagrams such as a burndown (or burnup) chart cannot show the effect of a client instructing the team to stop work on a particular feature for several weeks, or adding new elements to the work. The instructions may have no effect (the team simply work on other things), or they may have a major effect. The problem is quantifying the effect to a standard that will be accepted as evidence in court proceedings.  CPM has major flaws, but it can be used to show a precise delay as a specific consequence of a change in the logic diagram. Nothing similar seems to have emerged in the Agile domain.

These challenges are discussed in WPM – Schedule Control in Agile and Distributed Projects (and are the focus of ongoing work).

The agile paradigm has a lot to offer, but to become a commercially effective option, the project controls and contractual frameworks will need a major overhaul.  For more on managing agile see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1


[1] Developing and defending contractual claims is discussed in Forensic analysis and reporting (cost & time): https://mosaicprojects.com.au/PMKI-ITC-020.php#Process1

Critical Path Characteristics and Definitions

I’m wondering what is causing the confusion appearing in so many posts lately concerning the definition of the critical path. Is it:

  1. A lack of knowledge?
  2. People being out of date and using superseded definitions?
  3. People not understanding the difference between a characteristic and a definition?

As most people know (or should know), the definition used by the PMI Practice Standard for Scheduling (Third Edition), the International Organization for Standardization (ISO), and most other reputable authorities in their standards is similar to:

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase. 

For more on the development of this definition see: Defining the Critical Path.


To deal with the questions above, in reverse order:

The difference between a characteristic and a definition.

The definition of a phrase or concept (the ‘critical path’ is both) should be a short, concise statement that is always correct. A characteristic is something that indicates the concept may be present.

Everyone of significance has always agreed the critical path is the sequence of activities determining the earliest possible completion of the project (or, if the project has staged completions, a stage or phase).  This is the basis of the current valid definitions. As a direct consequence, in a properly constructed CPM schedule the float on the critical path is likely to be lower than on other paths, but not always. Low or zero float is a characteristic often seen on a critical path, but this is a consequence of its defining feature: being longer than other paths.

Superseded definitions.

In the 1960s and 70s, most CPM schedules were hand drawn and calculated using a day-number calendar. This meant there was only one calendar, and constraints were uncommon.  When there are no constraints and only a single calendar in use, the critical path has zero float! From the 1980s on, most CPM schedules have been developed using various software tools, all of which offer the user the option to impose date constraints and use multiple calendars (mainframe scheduling tools generally had these features from the 1960s on).

Using more than one calendar can cause different float values to occur within a single chain of activities; this is discussed in Calendars and the Critical Path.

Date constraints can create positive or negative float (usually negative), depending on the imposed date compared to the calculated date and the type of constraint; this is discussed in Negative Float and the Critical Path.

Consequently, for at least the last 40 years, the definition of a critical path cannot be based on float – float changes depending on other factors.
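
A minimal sketch (hypothetical network, durations, and imposed finish date – my own illustration in Python) shows the effect: adding a backward pass from a constrained finish leaves the longest path with negative float, not zero:

def total_float(durations, predecessors, imposed_finish):
    """Forward and backward pass; float measured against an imposed finish date."""
    successors = {a: [] for a in durations}
    for a, preds in predecessors.items():
        for p in preds:
            successors[p].append(a)
    ef, lf = {}, {}
    def early_finish(a):
        if a not in ef:
            ef[a] = max((early_finish(p) for p in predecessors[a]), default=0) + durations[a]
        return ef[a]
    def late_finish(a):
        if a not in lf:
            # Late finish is limited by successors, or by the imposed finish date
            lf[a] = min((late_finish(s) - durations[s] for s in successors[a]),
                        default=imposed_finish)
        return lf[a]
    return {a: late_finish(a) - early_finish(a) for a in durations}

# Hypothetical network, but the client has imposed a finish of day 15
durations = {'A': 5, 'B': 10, 'C': 4, 'D': 3}
predecessors = {'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B', 'C']}
print(total_float(durations, predecessors, imposed_finish=15))
# {'A': -3, 'B': -3, 'C': 3, 'D': -3} - the critical path A-B-D has negative, not zero, float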

Knowledge?

One of the problems with frequently repeated fallacies is when people do a reference search, they find a viable answer, and then use that information assuming the information is correct. This is the way we learn, and is common across all disciplines.

Academic papers are built on references and, despite the peer review process, can reference false information and continue to spread a falsehood. One classic example is the number of books and papers that still claim Henry Gantt developed the bar chart, despite the fact bar charts were in use 100 years before Gantt published his books (which make no claim to him having invented the concept); for more on this see: https://mosaicprojects.com.au/PMKI-ZSY-020.php#Barchart. Another common falsehood is that Henry Gantt ‘invented project management’ – his work was focused on improving factory production processes: https://mosaicprojects.com.au/PMKI-ZSY-025.php#Overview

Academics are trained researchers and still make mistakes; the rest of us have a bigger challenge! The spread of un-reviewed publications via the internet over the last 20+ years started the problem. Now Generative AI (Gen AI) and large language models (LLMs) are exacerbating it. For most of us it is getting harder and harder to understand where the information being presented by a person, or in an article, originated. Gen AI is built to translate data into language; it has no ability to determine if the data it has found is from a credible source or not. And as more and more text is produced by the various Gen AI tools, wrong information will be repeated more often, making it more likely that the wrong information will be found and repeated again, and again.

I’m not sure of the solution to this challenge. Gen AI is clearly not skilled in project management practice (even the PMI AI tool); for more discussion on this important topic see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

Reference

One reference that is reliable is Mosaic’s Easy CPM.  It incorporates most of what we know, focused on developing and using an effective schedule in any software tool. The book is designed to provide practical guidance to people involved in developing, or using, schedules based on the Critical Path Method (CPM), and to act as a reference and practice guide to enhance the effectiveness of their scheduling practice.

For more see: https://mosaicprojects.com.au/shop-easy-cpm.php 

LLM and Project Management – The Intelligence in AI is limited!

The best known LLM (Large Language Model) is ChatGPT. The developers acknowledge ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as during the LLM training there is no single source of truth (the app is biased towards what it has been taught). The ‘training source’ for ChatGPT is the internet, which has 1000s of incorrect posts on various aspects of project management – GIGO applies.

To overcome this problem, a number of tools based on groomed training materials are in development, some proprietary in-house, and some to be publicly available. Two of these are:

Marvin, a collaboration between Projecting Success and Praxis. Mosaic is supporting this development – watch this space in 2024.

PMI’s Project Infinity, based on the vast library of publications and papers held by PMI.  This tool is still in Beta, but as a PMI member I’ve had access to test the model.

Project Infinity Test (1)

The ‘intelligence’ in LLM applications is the ability to understand and reply to language in a coherent way.  You ask a question, the app understands what you have asked, searches its learning, and formulates an answer you can understand. What seems to be missing is the ability to qualify the information in its systems, particularly when specific facts are required.

My test for Project Infinity was based on asking “What is the critical path?”.  The correct answer to this question based on the PMI Practice Standard for Scheduling (Third Edition) and ISO Standards is something similar to: 

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase.  See: Defining the Critical Path 

As anyone with even a limited knowledge of CPM scheduling knows, ‘Zero Float[1]’, or float in general, cannot be used to define the critical path for the following reasons:

  1. Date constraints can cause changes in the float value, usually to create negative float, see: Negative Float and the Critical Path
  2. Multiple calendars can change float values along the critical path, see: Calendars and the Critical Path

A summary of my ‘chat’ with Project Infinity is:

Q1 – what is the critical path?

       ‘The critical path is the longest sequence of activities in a project plan which must be completed on time in order for the project to be completed on schedule. It is determined by identifying the activities that have zero slack or float time,….’

Q2 – Why are you claiming the critical path has zero float when it may have negative float?

        ‘Negative float can occur when an activity’s late finish date is earlier than its early finish date, which means that the activity is taking longer than planned and is delaying the project. However, the critical path always has zero float because any delay in the critical path activities will cause a delay in the overall project timeline. …….  In summary, while negative float can occur in non-critical path activities, the critical path always has zero float because any delay in these activities will cause a delay in the overall project timeline.’

Q3 – so what happens when the finish milestone and the project overall have a constrained finish?

        ‘When the finish milestone and the project overall have a constrained finish, total float may be used to provide an early indication of potential project completion slippage.’
[Note: I’m assuming ‘Total Float’ can be negative – PI did not say so.]

Q4 – Why does your third answer contradict the first answer?

        ‘My third answer does not contradict the first answer. ……’

The full exchange can be seen at Project Infinity Test 1 – Critical Path

Conclusions

This simple test shows that, at the current level of development, Project Infinity has a lot of learning to do:

  • It ignored key PMI documents, in particular the PMI Practice Standard for Scheduling (Third Edition).
  • It failed to recognize a direct contradiction in its answers.

Therefore, while LLM tools can help save time bringing together information from diverse sources, their training to date is limited to absorbing information from documents; the next stage of development, involving qualifying and grading the data, may be a way off. So if you do not know the right answer to a question, you cannot rely on an AI tool using an LLM to provide you with a way out.

To make matters worse, accountability in AI is a complex issue. We know AI systems can misstep in numerous ways, which raises questions about who is responsible. This is a complex legal issue, and in the absence of someone else who is demonstrably at fault, you are likely to carry the can!

For more on AI in project management see:  https://mosaicprojects.com.au/PMKI-SCH-033.php#AI


[1] The concept of the critical path having zero total float arose in the 1960s when computer programs were relatively simple and most schedules were manually drawn and calculated. With a single ‘Day Number’ calendar and no constraints, the longest path in a network had zero float. The widespread introduction in the 1980s of computer programs that allowed multiple calendars and constraints invalidated this definition.