
Tag Archives: Project Controls

WPM for Lean & Distributed Projects

The core concept underlying the Critical Path Method (CPM) is that there is one best way to undertake the work of the project, and that this can be accurately modelled in the CPM schedule. This premise does not hold for either distributed projects or projects applying Lean Construction management. The two types of project differ: lean is a management choice, whereas distribution is a physical fact:
–  Distributed projects are ones where the physical distribution of the elements to be constructed means significant amounts of the work can be done in any sequence and changing the sequence when needed is relatively easy.
–  Lean construction is a project delivery process that uses Lean methods to maximize stakeholder value and reduce waste by emphasizing collaboration between everyone involved in a project. To achieve this the work is planned and re-planned as needed by the project team focusing on optimising production.

In both cases, the flexibility in the way the detailed work is performed and the relative ease with which the sequence can be changed means CPM is ineffective as a predictor of status and completion.

Our latest article WPM for Lean & Distributed Projects looks at how Work Performance Management (WPM) can be used to assess both the current status and the projected completion for these types of project, regardless of the number of sequence changes made to the overall plan.

Download WPM for Lean & Distributed Projects from: https://mosaicprojects.com.au/PMKI-SCH-041.php#WPM-Dist

See more on WPM as a valuable tool to add to your project controls system: https://mosaicprojects.com.au/PMKI-SCH-041.php#Overview

White Constructions v PBS Holdings Revisited

White Constructions Pty Ltd v PBS Holdings Pty Ltd [2019] NSWSC 1166, involved a claim for delay and costs arising out of a contract to design a sewerage system for a subdivision and submit it for approval. The alleged breach was the failure to create and submit a sewer design acceptable to the approval authority which had the effect of delaying completion of the subdivision, giving rise to a claim for damages by White.

White and PBS both appointed experts to undertake a schedule analysis. The experts agreed an ‘as-built’ program of the works but disagreed on almost everything else, including the delay analysis method to use, the correct application of the methods, and the extent of the overall project delay caused by the delays in approving the sewer design.

The Judge found:

[Clause 18]      Plainly, both experts are adept at their art. But both cannot be right. It is not inevitable that one of them is right.
[Note: This approach is consistent with the UK court decision of Akenhead J in Walter Lilly & Company Ltd v Mckay [2012] EWHC 1773 (TCC) at [377], “the court is not compelled to choose only between the rival approaches and analyses of the experts. Ultimately it must be for the court to decide as a matter of fact what delayed the works and for how long”. This precedent has been followed on a number of occasions[1].]

[Clause 22]      The expert reports are complex. To the unschooled, they are impenetrable. It was apparent to me that I would need significant assistance to be put in a position to critically evaluate their opinions and conclusions.

[Clause 25]      Under UCPR r 31.54, the Court obtained the assistance of Mr Ian McIntyre (on whose appointment the parties agreed).

[Clause 137]   The major components of the works were:
       • earthworks,
       • roadworks and kerbing,
       • sewerage,
       • electrical and National Broadband Network (NBN) installation,
       • footpaths, and
       • landscaping.

[Clause 138]   The electrical and NBN installation was contracted to and carried out by an organisation called Transelect. Landscaping was contracted to RK Evans Landscaping Pty Ltd. The as-built program is not in dispute.
[Note: the rest of the work was undertaken by other contractors]

[Clause 184]   White bears the onus of establishing that it suffered loss and the quantum of it.

[Clause 185]   White’s damages are based on delay to the whole project, said to be attributable to the late (underbore) sewer design. This is not the type of subject upon which precise evidence cannot be adduced. [Therefore] It is not a subject which involves the Court having to make an estimation or engage in some degree of guesswork.

[Clause 188]   The descriptions of the methods adopted by Shahady and Senogles respectively are evidently derived from the publication of the United Kingdom Society of Construction Law, the Delay and Disruption Protocol….

[Clause 191]   Mr McIntyre’s opinion, upon which I propose to act, is that for the purpose of any particular case, the fact that a method appears in the Protocol does not give it any standing, and the fact that a method, which is otherwise logical or rational, but does not appear in the Protocol, does not deny it standing.
[Note: this is the same wording as an express statement contained in the Delay and Disruption Protocol]

[Clause 195]   Mr McIntyre’s opinion, upon which I propose to act, is that neither method [used by the parties experts] is appropriate to be adopted in this case.

[Clause 196]   Mr McIntyre’s opinion, upon which I propose to act, is that close consideration and examination of the actual evidence of what was happening on the ground will reveal if the delay in approving the sewerage design actually played a role in delaying the project and, if so, how and by how much. In effect, he advised that the Court should apply the common law common sense approach to causation referred to by the High Court in March v E & MH Stramare Pty Ltd (1991) 171 CLR 506.

[Clause 197]   The Court is concerned with common law notions of causation. The only appropriate method is to determine the matter by paying close attention to the facts, and assessing whether White has proved, on the probabilities, that delay in the underboring solution delayed the project as a whole and, if so, by how much.

[Clause 198]   This requires it to establish that:
• the whole project would have been completed by 15 July 2016,
• the final sewer approval delay delayed sewer works,
• the sewer works delay prevented non-sewer works from otherwise proceeding, that is, that the programme could not reasonably have been varied to accommodate the consequences of late approval, and
• other works could not have been done to fill downtimes so as to save time later.

[Clause 199]   ……… White has failed to discharge this burden.

Summary

The factors required to prove a delay outlined by the Judge at Clause 198 can be generalised as follows:

  1. The completion date for the project before the delay event occurred has to be known with some certainty.
  2. The delay event has to be shown to cause a delay which flowed through to extend the overall project completion date.
  3. There were not reasonable alternative ways of working that could mitigate the effect of the delay on project completion.

More significantly, none of these steps needs a CPM schedule. The project status and the effect of the disruption on project completion can be assessed based on its effect on the productivity of key resources. This is discussed in Assessing Delays in Agile & Distributed Projects: https://mosaicprojects.com.au/PDF_Papers/P215_Assessing_Delays_In_Agile_+_Distributed_Projects.pdf


[1]     This approach by the courts is discussed in Delivering Expert Evidence is Becoming Harder: https://mosaicprojects.com.au/Mag_Articles/AA028_Delivering_Expert_Evidence.pdf

A Brief History of Agile

The history of agile software development is not what most people think, and is nothing like the story pushed by most Agile Evangelists.

Our latest publication A Brief History of Agile shows that from the beginning of large system software development the people managing the software engineering understood the need for prototyping and iterative and incremental development. This approach has always been part of the way good software is developed.

The environment the authors of the early papers referenced and linked in the article were operating in (satellite software and ‘cold-war’ control systems, plus the limitations of the computers they were working on) did require a focus on testing and documentation: it’s too late for a bug-fix once WW3 has started. But this is no different to modern-day control systems development where people’s lives are at stake. Otherwise, nothing much has changed: good software is built incrementally and tested progressively.

The side-track into ‘waterfall’ seems to have been started by people with a focus on requirements management and configuration management, both approached from a document-heavy, bureaucratic perspective. Add the desire of middle management for the illusion of control, and you get waterfall imposed on software developers by people who knew little about the development of large software systems. As predicted in 1970, ‘doing waterfall’ doubles the cost of software development. The fact that waterfall survives in some organisations through to the present time is a function of culture and the desire for control, even if that control is an illusion.

The message from history, echoed in the Agile Manifesto, is you need to tailor the documentation, discipline, and control processes, to meet the requirements of the project. Developing a simple website with easy access to fix issues is very different to developing the control systems for a satellite that is intended to work for years, millions of miles from earth.

To read the full article and access many of the referenced papers and third-party analysis see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile

The evolution of the DCMA 14 Point Schedule Assessment

The DCMA 14-Point schedule assessment process was part of a suite of systems developed by the Defense Contract Management Agency (DCMA) starting in 2005. Its purpose was to standardize the assessment by DCMA staff of contractor developed Integrated Master Schedules.  

The DCMA is an agency of the United States federal government reporting to the Under Secretary of Defense for Acquisition and Sustainment, responsible for administering contracts for the Department of Defense (DoD), and other authorized federal agencies. It was created in February 1990 as the Defense Contract Management Command (DCMC) within the Defense Logistics Agency (DLA), then in March 2000, the DCMC was renamed as the Defense Contract Management Agency (DCMA) and made an independent agency.

The DCMA works directly with defense suppliers and contractors to oversee their time, cost, and technical performance, by monitoring the contractors’ performance and management systems to ensure that cost, product performance, and delivery schedules comply with the terms and conditions of the contracts.

The DOD had published specifications for an Integrated Master Schedule since 1991. In March 2005, the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L)) issued a memo mandating the use of an Integrated Master Schedule (IMS) for contracts greater than $20 million and requiring the DCMA to establish guidelines and procedures to monitor and evaluate these schedules.

In response, the DCMA developed their 14-Point Assessment Checks as a protocol to be used for CPM schedule reviews made by their staff. The initial documentation describing this protocol appears to have consisted of an internal training course provided by the DCMA. The training was deployed as an on-line course in late 2007 and updated and re-issued on 21st November 2009 (none of these training materials appear to be available – there may have been other changes).

The final and current update to the 14 Point Assessment was included in Section 4 of the Earned Value Management System (EVMS) Program Analysis Pamphlet (PAP) DCMA-EA PAM 200.1 from October 2012.  This PAP appears to still be current as at the start of 2024 with DCMA training available to assessors. This version is the basis of our guide: DCMA 14-Point Assessment Metrics.
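To give a sense of the style of metric involved, here is a minimal sketch of the kind of check the protocol applies: the commonly cited ‘missing logic’ test, which flags a schedule when too many incomplete activities lack a predecessor or successor. The field names and the 5% threshold are illustrative assumptions for this example, not quotations from the PAP.

```python
# Hypothetical sketch of a DCMA-style 'missing logic' check.
# Field names and the 5% threshold are illustrative assumptions.

def logic_check(activities, threshold=0.05):
    """Return (ratio, passed) for the missing-logic metric:
    the share of incomplete activities with no predecessor or
    no successor, compared against a pass/fail threshold."""
    incomplete = [a for a in activities if not a.get("complete", False)]
    missing = [a for a in incomplete
               if not a.get("predecessors") or not a.get("successors")]
    ratio = len(missing) / len(incomplete) if incomplete else 0.0
    return ratio, ratio <= threshold

schedule = [
    {"id": "A", "predecessors": [], "successors": ["B"]},    # no predecessor
    {"id": "B", "predecessors": ["A"], "successors": ["C"]},
    {"id": "C", "predecessors": ["B"], "successors": []},    # no successor
]
ratio, passed = logic_check(schedule)
print(f"missing logic: {ratio:.0%}, pass: {passed}")  # missing logic: 67%, pass: False
```

A real assessment applies a suite of such checks (leads, lags, constraints, high float, and so on), each with its own threshold.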

However, the usefulness of this approach to assessing schedule quality is questionable!

The objectives of the DCMA 14 Point assessment are frequently misunderstood; this was discussed in DCMA 14 Point Schedule Assessment – Updated.

The validity of some of the ‘checks’ is questionable; they do not conform to standards set by various scheduling authorities.

And, the relevance of the DCMA 14-Points is also questionable. The publication of the U.S. Government Accountability Office (GAO) Schedule Assessment Guide in December 2015 appears to provide a more realistic basis for assessment. 

Based on the above, one has to ask if this old baseline for assessing schedule quality is still relevant. For more on schedule quality assessment see: https://mosaicprojects.com.au/PMKI-SCH-020.php#Overview

Agile’s Hidden Secret!

The two fundamental questions standard agile metrics cannot answer consistently are:

1.  How far ahead or behind schedule are we currently?

2.  When are we expected to finish?

Most of the tools and techniques used to manage Agile projects are good at defining the work (done, in progress, or not started) and can indicate if the work is ahead of or behind a nominated planned rate of production. However, there is no direct calculation of the time the work is currently ahead or behind the required production rate, or of what this is likely to mean for the completion of the project. A full discussion of this topic is in Calculating Completion. Most project sponsors and clients need to know when the project they are funding will actually finish; they have other people who need to make use of the project’s outputs to achieve their objectives. At present, all Agile can offer is an educated assessment based on the project team’s understanding of the work.

Work Performance Management (WPM) has been designed to solve this challenge by providing answers to these questions based on consistent, repeatable, and defensible calculations.

WPM is a simple, practical tool that uses project metrics already collected for other purposes within the project to assess progress and calculate a predicted completion date, by comparing the amount of work achieved at a point in time with the amount of work that should have been achieved. Based on this data, WPM calculates the project status and the expected completion date, assuming the rate of progress remains constant.
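The calculation described above can be sketched in a few lines. The dates, work units, and function name below are purely illustrative; any consistent metric (story points, work units, activity days) would do.

```python
from datetime import date, timedelta

def wpm_forecast(start, status_date, planned_finish, work_done, total_work):
    """Compare work achieved with work required to date, and project a
    completion date assuming the average production rate to date
    continues unchanged (the core WPM assumption)."""
    days_elapsed = (status_date - start).days
    planned_days = (planned_finish - start).days
    planned_rate = total_work / planned_days   # units/day required
    actual_rate = work_done / days_elapsed     # units/day achieved
    # Time ahead (+) or behind (-): the work shortfall or surplus
    # expressed in days of planned production.
    variance_days = (work_done - planned_rate * days_elapsed) / planned_rate
    forecast = start + timedelta(days=round(total_work / actual_rate))
    return variance_days, forecast

var_days, finish = wpm_forecast(
    start=date(2024, 1, 1), status_date=date(2024, 3, 1),
    planned_finish=date(2024, 7, 1),
    work_done=50, total_work=200)
```

With these illustrative numbers, the work is running about 14.5 days behind the required production rate, and the forecast completion moves from 1 July to 28 August 2024.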

Our latest article, WPM for Agile Projects identifies the cause of this information gap in Agile project management, explains the inability of current tools to accurately predict completion and demonstrates how WPM will effectively close this critical information gap.
Download WPM for Agile Projects: https://mosaicprojects.com.au/Mag_Articles/AA040_-_WPM_for_Agile_Projects.pdf

For more on the practical use of WPM, free sample files, and access to the tool see: https://mosaicprojects.com.au/PMKI-SCH-041.php  

Using WPM to augment CPM predictions

We all know (or should know) that when a project is running late, the predicted completion date calculated by the ‘critical path method’ (CPM) at an update tends to be optimistic, and this bias remains true for predictions based on simple time analysis as well as schedule calculations made using resource leveling.

There are two primary reasons for this:

  1. CPM assumes that all future work will occur exactly as planned, regardless of performance to date: the planned durations of future activities do not change.
  2. The burning of float has no effect on the calculated completion date until the float is 100% consumed and the activity becomes critical.
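The second point can be shown with a trivial calculation (hypothetical numbers): slippage on a non-critical activity is invisible in the CPM completion date until its float is fully consumed.

```python
def cpm_completion(planned_finish_day, total_float, slip_days):
    """CPM behaviour: a slip on a non-critical activity leaves the
    calculated completion date unchanged until the activity's float is
    exhausted; only the excess beyond the float moves completion out."""
    return planned_finish_day + max(0, slip_days - total_float)

# An activity with 5 days of total float, on a project planned to finish day 100:
for slip in (0, 3, 5, 7):
    print(slip, cpm_completion(100, 5, slip))
# prints: 0 100 / 3 100 / 5 100 / 7 102
```

The first five days of slip change nothing in the calculated date, even though the project is clearly losing ground.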

For more on this issue see Why Critical Path Scheduling is Wildly Optimistic!

Having an optimistic schedule for the motivation of resources to perform is not all bad: the updated CPM schedule shows the minimum level of performance needed to stop the situation deteriorating. The problem is that more senior managers also need a reliable prediction of when the project can realistically be expected to finish, and CPM cannot provide this. A more realistic (more pessimistic) view is obtained by applying the principles of Work Performance Management (WPM) to a CPM schedule, using ‘activity days’ taken from the CPM schedule as the metric.

Our latest article, WPM Solves CPM Optimism, uses a simple CPM schedule to demonstrate the differences in the calculated project completion dates between CPM and WPM. The value of WPM is in stripping away the optimism bias inherent in CPM scheduling (particularly early in the project), thereby providing management with a clear indication of where the project is likely to finish if work continues at the current levels of productivity. These predictions are not a statement of fact: change the productivity and you change the outcome! A similar approach can be used to assess projected completion dates based on a simple manual bar chart.

To download the article, and see more on augmenting CPM with WPM to enhance controls information: https://mosaicprojects.com.au/PMKI-SCH-041.php#WPM-CPM

Commercializing Agile

Agile in its various forms is becoming mainstream, which means an increasing number of commercial contracts are being delivered by contractors who either choose, or are required, to use an agile methodology to create their contracted deliverables. While this is probably a good thing, the shift in approach can cause a number of problems. The biggest is managing the legally imposed, contractual requirement to deliver 100% of the designated project deliverables on time. The funds available to the contractor to do this work are defined by the contract price, and failure to deliver the contracted deliverables within the contracted timeframe can lead to significant cost penalties being applied[1].

The requirement to deliver a project as promised in the agreed contract is business-as-usual for most construction and engineering projects and is common across many other industries. While relatively rare, software companies have also been successfully sued for breach of contract when their deliverables did not meet the contracted obligations; some early cases are discussed in Software sales hype and the law, and IT Business Sued for US$300 million+. In short, choosing to use Agile as a project delivery methodology will not change the laws of contract, which means organizations using the agile methodology will need to become more commercial and adapt their processes to include:

  1. Developing the realistic time and cost estimates needed to enter into a contract.
  2. Monitoring and controlling the project work to optimize the delivery of the contracted requirements within the contract timeframe.
  3. Enhancing their contract administration to deal with changes, variations, reporting, claims and other contractual requirements and issues.

This post is a start in looking for practical solutions to some of these challenges.

Contract Claim Administration

Two of the core tenets of Agile are welcoming change when it creates additional value for the client, and working with the client to discuss and resolve problems. While these are highly desirable attributes that should be welcomed in any contractual situation, what happens when the relationship breaks down, as it will on occasions?

The simple answer is that every contract is subject to law, and the ultimate solution to a dispute is a trial, after which a judge will decide the outcome based on applying the law to the evidence provided to the court. The process is impartial and focused on delivering justice, but justice is not synonymous with a fair and reasonable outcome.  To obtain a fair and reasonable outcome, evidence is needed that can prove, or disprove each of the propositions being put before the court.

In 90% of court cases relating to contract performance, the core elements in dispute are money and time. The contractor claims the client changed, or did, something (or things) that increased the time and cost of completing the work under the contract; the client denies this and counterclaims that the contractor finished late because it failed to properly manage the work of the contract.

The traditional approach to resolving these disputes was to obtain expert evidence on the cost of each change, the time needed to implement it, and its effect on the completion date. Determining the cost of a change is not particularly affected by the delivery methodology: the additional work involved in a change, and its cost, can be determined for an ‘agile’ project in much the same way as for most other projects. The major issues arise in assessing a reasonable delay.

For the last 50+ years, courts have been told by many hundreds of experts, the appropriate way to assess project delay is by using a critical path (CPM) schedule. Critical path theory is based on an assumption that to deliver a project successfully there is one best sequence of activities that have to be completed in a pre-defined way to achieve the best result. Consequently, this arrangement of the work can be modelled in a logic network and based on this model, the effect of any change can be assessed.

Agile approaches the work of a project from a completely different perspective. It assumes there is a ‘backlog’ of work to be accomplished, and that the best people to decide what to do next are the project team when they are framing the next sprint or iteration. Ideally, the team making these decisions will have the active participation of a client representative, but this is not always the case. The best sequence of working emerges; it is not predetermined, and therefore a CPM schedule cannot be developed before the work starts.

Assessing and separating the delay caused by a change requested or approved by the client from delays and inefficiencies caused by the contractor is difficult at the best of times. The process becomes more difficult in projects using an agile approach to the work, but it is essential for assessing time-related claims under a contract.

There are some control tools available in Agile, but diagrams such as a burndown (or burnup) chart are not able to show the effect of a client instructing the team to stop work on a particular feature for several weeks, or adding some new elements to the work. The instructions may have no effect, the team simply work on other things, or they may have a major effect. The problem is quantifying the effect to a standard that will be accepted as evidence in court proceedings.  CPM has major flaws, but it can be used to show a precise delay as a specific consequence of a change in the logic diagram. Nothing similar seems to have emerged in the Agile domain.
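The evidentiary gap can be illustrated with a toy example (the numbers are entirely hypothetical): a burndown chart records remaining work per sprint, but not why progress flattened, so a client-instructed stop on a feature and plain contractor inefficiency can produce identical charts.

```python
# Two hypothetical burndown series: remaining work units at the end of
# each sprint. In one, the client halted a feature for sprints 2-3; in
# the other, the team was simply unproductive in sprints 2-3.
burndown_stop_work = [100, 85, 85, 85, 70, 55, 40]  # client-instructed hold
burndown_slow_team = [100, 85, 85, 85, 70, 55, 40]  # low productivity

# The charts are indistinguishable: the data records *what* happened
# to the remaining work, not *why* it happened.
print(burndown_stop_work == burndown_slow_team)  # True
```

This is why burndown data alone struggles to meet the causation standard the courts require: it shows the flat spot but cannot attribute it to either party.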

These challenges are discussed in WPM – Schedule Control in Agile and Distributed Projects (and are the focus of ongoing work).

The agile paradigm has a lot to offer, but to become a commercially effective option, the project controls and contractual frameworks will need a major overhaul.  For more on managing agile see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1


[1] Developing and defending contractual claims is discussed in Forensic analysis and reporting (cost & time): https://mosaicprojects.com.au/PMKI-ITC-020.php#Process1

Critical Path Characteristics and Definitions

I’m wondering what is causing the confusion appearing in so many posts lately concerning the definition of the critical path. Is it:

  1. A lack of knowledge?
  2. People being out of date and using superseded definitions?
  3. People not understanding the difference between a characteristic and a definition?

As most people know (or should know), the definition used by the PMI Practice Standard for Scheduling (Third Edition), the International Organization for Standardization (ISO), and most other reputable authorities in their standards is similar to:

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase. 

For more on the development of this definition see: Defining the Critical Path.


To deal with the questions above, in reverse order:

The difference between a characteristic and a definition.

The definition of a phrase or concept (the ‘critical path’ is both) should be a short, concise, statement that is always correct. A characteristic is something that indicates the concept may be present.

Everyone of significance has always agreed that the critical path is the sequence of activities determining the earliest possible completion of the project (or, if the project has staged completions, of a stage or phase). This is the basis of the current valid definitions. As a direct consequence, in a properly constructed CPM schedule the float on the critical path is likely to be lower than on other paths, but not always. Low or zero float is a characteristic often seen on a critical path, but it is a consequence of the path’s defining feature: being longer than the other paths.

Superseded definitions.

In the 1960s and 70s, most CPM schedules were hand drawn and calculated using a day number calendar. This meant there was only one calendar, and constraints were uncommon. When there are no constraints and only a single calendar in use, the critical path has zero float! From the 1980s on, most CPM schedules have been developed using various software tools, all of which offer the user the option to impose date constraints and use multiple calendars (mainframe scheduling tools generally had these features from the 1960s on).

Using more than one calendar can cause different float values to occur within a single chain of activities; this is discussed in Calendars and the Critical Path.

Date constraints can create positive or negative float (usually negative) depending on the imposed date compared to the calculated date and the type of constraint, this is discussed in Negative Float and the Critical Path.

Consequently, for at least the last 40 years, the definition of a critical path cannot be based on float; float changes depending on other factors.
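A stripped-down calculation makes the point. The two-activity network and imposed date below are illustrative assumptions, not taken from any real schedule: a finish constraint drives the float on the longest path negative, so ‘zero float’ fails as a definition while ‘longest path’ still identifies the driver of completion.

```python
# Minimal forward/backward pass on two parallel activities (durations in
# days, both starting day 0) with an imposed finish constraint.
durations = {"A": 10, "B": 4}
imposed_finish = 8                       # constraint earlier than A's day 10

early_finish = {a: durations[a] for a in durations}
late_finish = {a: imposed_finish for a in durations}
total_float = {a: late_finish[a] - early_finish[a] for a in durations}

print(total_float)   # {'A': -2, 'B': 4} -- no path has zero float
driving = max(durations, key=durations.get)
print(driving)       # 'A': the longest path still determines completion
```

No activity in this network has zero float, yet the critical path is unambiguous: it is the sequence determining the earliest possible completion, exactly as the current definitions state.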

Knowledge?

One of the problems with frequently repeated fallacies is that when people do a reference search, they find a viable answer and then use that information assuming it is correct. This is the way we learn, and it is common across all disciplines.

Academic papers are built on references and, despite the peer review process, can reference false information and continue to spread a falsehood. One classic example is the number of books and papers that still claim Henry Gantt developed the bar chart, despite the fact that bar charts were in use 100 years before Gantt published his books (which make no claim to him having invented the concept); for more on this see: https://mosaicprojects.com.au/PMKI-ZSY-020.php#Barchart. Another common falsehood is that Henry Gantt ‘invented project management’: his work was focused on improving factory production processes: https://mosaicprojects.com.au/PMKI-ZSY-025.php#Overview

Academics are trained researchers and still make mistakes; the rest of us have a bigger challenge! The spread of un-reviewed publications via the internet over the last 20+ years started the problem; now Generative AI (Gen AI) and large language models (LLMs) are exacerbating it. For most of us, it is getting harder and harder to understand where the information presented to a person, or in an article, originated. Gen AI is built to translate data into language; it has no ability to determine whether the data it has found is from a credible source. And as more and more text is produced by the various Gen AI tools, wrong information will be repeated more often, making it more likely the wrong information will be found and repeated again, and again.

I’m not sure of the solution to this challenge. Gen AI is clearly not skilled in project management practice (even the PMI AI tool); for more discussion on this important topic see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

Reference

One reference that is reliable is Mosaic’s Easy CPM. It incorporates most of what we know, focused on developing and using an effective schedule in any software tool. The book is designed to provide practical guidance to people involved in developing, or using, schedules based on the Critical Path Method (CPM), and to act as a reference and practice guide to enhance the effectiveness of their scheduling practice.

For more see: https://mosaicprojects.com.au/shop-easy-cpm.php 

LLM and Project Management – The Intelligence in AI is limited!

The best-known LLM (Large Language Model) is ChatGPT. Its developers acknowledge that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as during LLM training there is no single source of truth (the app is biased towards what it has been taught). The ‘training source’ for ChatGPT is the internet, which has thousands of incorrect posts on various aspects of project management – GIGO applies.

To overcome this problem, a number of developments based on groomed training materials are underway, some proprietary and in-house, and some to be publicly available. Two of these are:

Marvin, a collaboration between Projecting Success and Praxis, Mosaic is supporting this development – watch this space in 2024.

PMI’s Project Infinity, based on the vast library of publications and papers held by PMI.  This tool is still in Beta, but as a PMI member I’ve had access to test the model.

Project Infinity Test (1)

The ‘intelligence’ in LLM applications is the ability to understand and reply to language in a coherent way.  You ask a question, the app understands what you have asked, searches its learning, and formulates an answer you can understand. What seems to be missing is the ability to qualify the information in its systems, particularly when specific facts are required.

My test for Project Infinity was based on asking “What is the critical path?”.  The correct answer to this question based on the PMI Practice Standard for Scheduling (Third Edition) and ISO Standards is something similar to: 

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase.  See: Defining the Critical Path 

As anyone with even a limited knowledge of CPM scheduling knows, ‘Zero Float[1]’, or float in general, cannot be used to define the critical path for the following reasons:

  1. Date constraints can cause changes in the float value, usually to create negative float, see: Negative Float and the Critical Path
  2. Multiple calendars can change float values along the critical path, see: Calendars and the Critical Path

A summary of my ‘chat’ with Project Infinity is:

Q1 – what is the critical path?

       ‘The critical path is the longest sequence of activities in a project plan which must be completed on time in order for the project to be completed on schedule. It is determined by identifying the activities that have zero slack or float time,….’

Q2 – Why are you claiming the critical path has zero float when it may have negative float?

        ‘Negative float can occur when an activity’s late finish date is earlier than its early finish date, which means that the activity is taking longer than planned and is delaying the project. However, the critical path always has zero float because any delay in the critical path activities will cause a delay in the overall project timeline. …….  In summary, while negative float can occur in non-critical path activities, the critical path always has zero float because any delay in these activities will cause a delay in the overall project timeline.’

Q3 – so what happens when the finish milestone and the project overall have a constrained finish?

        ‘When the finish milestone and the project overall have a constrained finish, total float may be used to provide an early indication of potential project completion slippage.’
[Note: I’m assuming ‘Total Float’ can be negative; PI did not say so.]

Q4 – Why does your third answer contradict the first answer?

        ‘My third answer does not contradict the first answer. ……’

The full exchange can be seen at Project Infinity Test 1 – Critical Path

Conclusions

This simple test shows that, at its current level of development, Project Infinity has a lot of learning to do:

  • It ignored key PMI documents, in particular the PMI Practice Standard for Scheduling (Third Edition)
     
  • It failed to recognize a direct contradiction in its answers.

Therefore, while LLM tools can help save time by bringing together information from diverse sources, their training to date is limited to absorbing information from documents; the next stage of development, qualifying and grading the data, may be some way off. So if you do not know the right answer to a question, you cannot rely on an AI tool using LLM to provide you with a way out.

To make matters worse, accountability in AI is a complex issue. We know AI systems can misstep in numerous ways, which raises the question of who is responsible. This is a complex legal issue and, in the absence of someone else who is demonstrably at fault, you are likely to carry the can!

For more on AI in project management see:  https://mosaicprojects.com.au/PMKI-SCH-033.php#AI


[1] The concept of the critical path having zero total float arose in the 1960s when computer programs were relatively simple and most schedules were manually drawn and calculated. With a single ‘Day Number’ calendar and no constraints the longest path in a network had zero float. The introduction of computer programs in the 1980s that allowed multiple calendars and constraints invalidated this definition.

The evolution of AI

In our previous blog, AI is coming to a project near you!, we identified a large number of project management software applications using Artificial Intelligence (AI) and the rapid spread of the capability. But what exactly is AI? This post offers a brief overview of the concept.   

AI is not as new as some people imagine. Some of the mathematics underpinning AI can be traced back to the 18th century, and many of the fundamental concepts were developed in the 20th, but practical use remained very limited. Widespread practical use of AI required the development of computers with sufficient capability to process large amounts of data quickly.  Each of the developments outlined below was enabled by better processors and increased data storage capabilities.

Types of AI

The modern concept of intelligent processing is more than 50 years old, but the way a computer application works depends on the design of the application.  Very broadly:

Decision tables have been used in software since the 1960s. A decision table applies a cascading set of decisions to a limited set of data to arrive at a result.  The ‘table’ is hard-wired into the code and does not change. Many resource levelling algorithms are based on decision tables to decide which resources get allocated to which activities on each day.
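As a simple illustration (the activities, float values, and priority rules below are all invented), a decision table can be sketched as an ordered list of condition/result pairs that is fixed in the code and never changes at run time:

```python
# Hypothetical hard-wired decision table for daily resource allocation,
# in the spirit of simple levelling rules (invented example).
# Each row: (condition on the activity, resulting priority).
# The first matching row wins; the table itself never changes.
decision_table = [
    (lambda a: a["total_float"] < 0,  1),  # critical/late work first
    (lambda a: a["total_float"] == 0, 2),  # then zero-float work
    (lambda a: a["total_float"] <= 5, 3),  # then near-critical work
    (lambda a: True,                  4),  # everything else last
]

def priority(activity):
    for condition, result in decision_table:
        if condition(activity):
            return result

work = [
    {"name": "Pour slab",  "total_float": 0},
    {"name": "Fit-out",    "total_float": 12},
    {"name": "Earthworks", "total_float": -3},
]
# Allocate today's resources in priority order.
work.sort(key=priority)
print([a["name"] for a in work])  # ['Earthworks', 'Pour slab', 'Fit-out']
```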

Expert systems, known as rule-based systems in the 1960s, use explicitly crafted rules to generate responses or data sets based on an initial question. These systems were the basis of many automated Chatbots and help systems. The system’s rules are ‘hard-wired’ and do not change without external intervention.

Data mining was developed in the 1990s. The application uses generalized instructions to look at large volumes of data to discover previously unknown properties. The use of generalized processes means the data being examined does not need to be labelled or predefined; the application works out a suitable structure, and then draws classes of information from within the data set. These systems can be interactive, but are not self-learning. Extracting knowledge from ‘big data’ supports Business Intelligence and other business and marketing improvement initiatives.

Machine learning (ML). Basic ML is similar to data mining. It is concerned with the development and study of statistical algorithms that can effectively generalize to perform tasks without explicit instructions. ML focuses on prediction and recognition, based on properties the application has learned from training data. The basic functions of ML were defined in the early 2000s and the concept continues to evolve and develop. The basic ML approach can be seen in a range of project management tools where the application recommends durations, lists likely risks, or performs other predictive assessments on your current project, based on data from previous projects.
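For example, the ‘recommend a duration’ behaviour can be sketched as a prediction from labelled historical data (all figures below are invented). A real tool would use a properly trained statistical model; a simple nearest-neighbour average on a single feature keeps the underlying idea visible:

```python
# Illustrative sketch: predict a duration for a new activity from
# historical records of (quantity of work, actual duration in days).
# All numbers are invented for the example.
history = [
    (100, 4), (250, 9), (300, 11), (500, 18), (520, 19),
]

def predict_duration(quantity, k=2):
    # Take the k historical records closest in quantity and average
    # their actual durations - the 'learning' is entirely in the data.
    nearest = sorted(history, key=lambda h: abs(h[0] - quantity))[:k]
    return sum(duration for _, duration in nearest) / k

print(predict_duration(280))  # averages the 250 and 300 records -> 10.0
```

Note that the prediction changes as more project history is added, without any change to the code, which is the key difference from a hard-wired decision table.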

Algorithmic Decision Making, a subset of ‘expert systems’, is focused on using conventional machine learning and statistical techniques such as ordinary least squares, logistic regression, and decision trees to automate traditional human-based decision-making processes. Research suggests that well-designed algorithms are less biased and more accurate than the humans they replace, improving predictions and decisions, but care is needed; they can also perpetuate blind spots and biases, or be built to be fundamentally unfair.

Generative AI (Gen AI) extends the capability of ML. Gen AI uses generative artificial neural networks and ‘deep learning’ to deliver enhanced performance. Gen AI has been applied to large language models (LLM), computer vision, robotics, speech recognition, email filtering, agriculture, medicine, and many other fields. Each branch of development takes the basic principles of Gen AI and adapts them to the specific needs of the researchers. Latest trends are linking different strands of Gen AI to create new things such as generating pictures from verbal descriptions, and linking Gen AI to the IoT (Internet of Things) and additive manufacturing functions to produce computer designed ‘stuff’.

Large language models (LLM) are the branch of generative AI with most direct relevance to project management. LLM uses deep learning algorithms that can perform a variety of natural language processing (NLP) tasks. They are trained using massive datasets, which enables them to recognize, translate, predict, or generate text or other content. LLM applications must be pre-trained and then fine-tuned so that they can solve text classifications, answer questions, summarize documents, and generate text, sound, or images. The challenge with LLM is in the training materials: the system only knows what it has been taught. This branch of Gen AI burst into prominence with the development of ChatGPT. Its developer, OpenAI (a research company), launched ChatGPT on November 30, 2022; a year later, ChatGPT has worldwide attention.

LLM underpins most of today’s advanced AI applications and can generate content across multiple types of media, including text, graphics, and video. While early implementations have had issues with accuracy and bias, the tools are improving rapidly.

Progress to date indicates that the inherent capabilities of Gen AI will fundamentally change enterprise technology, and how businesses operate. Rather than simply requiring technical skills, employees will also need critical thinking, people skills, and a good understanding of ethics and governance processes.

In the four months from January to April 2023, the percentage of employees in Australia using Gen AI grew from 10% to 35%[1].  This rapid growth in use raises concerns around safety, privacy and security, but businesses that do not explore the use of Gen AI in their organisations or industry risk being left behind.

The technological world is becoming very closely integrated:

Source: ACS Australia’s Digital Pulse | A new approach to building Australia’s technology skills.

For more on the application of AI to project management software see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI

Our next post will look at the use of LLM in project management.


[1] Australian Computer Society research 2023. Australia’s Digital Pulse | A new approach to building Australia’s technology skills