The Problem with Waterfall

The term ‘waterfall’ appears in many posts without any clear definition of what the writers mean by it. The only constant seems to be that, in each writer’s view, ‘waterfall’ is not Agile and generally represents bad project management practice. In summary, the Agile advocates’ view seems to be:

Agile: A well-defined flexible project delivery process, based on the Agile Manifesto, applicable to software development and a wide range of other “soft projects” such as business change. Agile = Good!

Waterfall: Any project delivery process that is not Agile. Waterfall = Bad!

There are many problems with this simplistic viewpoint, starting with the fact that the concept of ‘waterfall’ had a very short life and, with the possible exception of a few very traditional software development organizations, no one uses waterfall for anything.

History of Waterfall.

To the best of my knowledge, the first publication to use the term Waterfall was the 1976 paper Software Requirements: Are They Really a Problem?, by T.E. Bell and T.A. Thayer. This paper misrepresented the 1970 paper Managing the development of large software systems, by Dr Winston Royce[1]. Royce proposed an iterative approach to the development of large systems, but Bell and Thayer falsely claimed he supported ‘waterfall’[2].

Summary diagram from Royce 1970.

The real start of Waterfall was the publication in 1988 of DOD-STD-2167A by the US Department of Defense, which established uniform requirements for the development of software based on the Waterfall approach[3].   

Extract from DOD-STD-2167A

Problems with the Waterfall approach were quickly identified, and in 1994 the US Department of Defense released MIL-STD-498 to correct them. Officially, Waterfall was dead and buried. However, many companies had adopted waterfall, and because waterfall projects were slow and subject to delay, hourly paid staff and contractors had a powerful incentive not to change, despite many better software development processes having been developed from the early 1980s onward.

Other types of projects and project delivery.

Waterfall was a short-lived software development methodology. The vast majority of projects in the construction, engineering, oil & gas, defence, and aerospace industries use project delivery methods based on the approaches described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Sixth Edition, and a range of other standards. These other projects generally have three phases:

  1. definition phase undertaken by the client organization to define the capabilities of the product being developed
  2. procurement phase where the client selects a delivery agent for the development of the product
  3. delivery phase where the delivery agent builds and delivers the product

The design of the product (ship, building, rocket, etc.) may be undertaken in full or in part during any one of the three phases. A minimum level of design is required to initiate procurement, but for simple buildings and civil engineering projects, it is not unusual for a complete design and specification to be provided by the client.

The procurement phase may be a simple pricing exercise, or a complex, and phased, design process (sometimes even involving the production of working prototypes), with selection being based on the capabilities of the design produced by the successful tenderer.

Then, in many projects, a significant amount of detailed design is still required during the delivery phase, including shop drawings produced by subcontractors and suppliers.

Similarly, the procurement arrangements vary widely. The client may choose to enter into some form of alliance or partnership with the preferred delivery agent based on shared risk and profits, or the client may choose a hard-dollar contract based on a fixed price to deliver a fixed scope, or some other form of contractual arrangement.

The only certainties are that the typical project approaches used for the vast majority of ‘other’ projects bear no resemblance to the waterfall approach, and this ‘other’ classification includes more than two-thirds of the world’s projects by value.

Conclusions

  1. I suggest it is time to follow the US DOD lead from 1994 and bury the concept of ‘waterfall’ – using the name 30 years after it was officially dropped is long enough.
  2. People involved in the ‘Agile’ industry need to wake up to the fact that software development is only one of many types of project. Most of the ‘other’ types of project do not use Agile, and they certainly don’t use waterfall.
  3. Agile and agility are not synonymous – all organisations benefit from a degree of agility, but this has nothing to do with selecting the best project delivery methodology (more on this later).
  4. In the 21st century, Waterfall is not synonymous with over-documentation and/or bad project management. There is plenty of bad project management practice around, but bad management needs to be called out for what it is – 99.999% of the time the bad managers are not trying to use waterfall in their work.

Ditching the concept of waterfall does create a couple of challenges. We all have an understanding of what Agile means as a project delivery process; we need similar, generally accepted classifications for other types of project delivery – more on this later. Similarly, the bad management practices branded as ‘waterfall’ need to be identified and understood: you cannot improve a bad process until the root cause of the problem is understood.

For more on Agile management see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1

Note: THE MYTH OF THE ‘WATERFALL’ SDLC expands on this post in far greater detail and is highly recommended as a reference: http://www.bawiki.com/wiki/Waterfall.html


[1] Download a copy of the 1970 Royce paper: https://mosaicprojects.com.au/PDF-Gen/Royce_-_Managing_the_development_of_large_software_systems.pdf  See Fig. 10.

[2] Download a copy of the 1976 Bell & Thayer paper: https://mosaicprojects.com.au/PDF-Gen/software_requirements_are_they_really_a_problem.pdf

[3] Download DOD-STD-2167A Defense System Software Development (1988): https://mosaicprojects.com.au/PDF-Gen/DOD-STD-2167A.pdf

Using WPM to augment CPM predictions

We all know (or should know) that when a project is running late, the predicted completion date calculated by the ‘critical path method’ (CPM) at an update tends to be optimistic, and this bias remains true for predictions based on simple time analysis as well as schedule calculations made using resource leveling.

There are two primary reasons for this:

  1. CPM assumes that all future work will occur exactly as planned, regardless of performance to date. The planned durations of future activities do not change.
  2. The burning of float has no effect on the calculated completion date until the float is 100% consumed and the activity becomes critical, as illustrated in the sketch below.
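
As a minimal illustration of the second point (the two-path network and durations are hypothetical), consuming float on a non-critical path leaves the CPM completion date untouched until the float runs out:

```python
# Minimal sketch: CPM completion = length of the longest path, so consuming
# float on a shorter path has no effect until the float is fully used up.
# The two-path network and durations are hypothetical.

def cpm_finish(path_a_days: int, path_b_days: int) -> int:
    """Project completion (working days) = duration of the longest path."""
    return max(path_a_days, path_b_days)

critical_path = 15   # planned length of the longest path (days)
parallel_path = 12   # planned length of the other path -> 3 days total float

for slip in range(6):
    print(f"parallel path slips {slip}d -> project finishes day "
          f"{cpm_finish(critical_path, parallel_path + slip)}")

# Slips of 0-3 days: completion stays at day 15 while the float is burned;
# only once the slip exceeds the float does the calculated date move.
```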

For more on this issue see Why Critical Path Scheduling is Wildly Optimistic!

Having an optimistic schedule to motivate resources to perform is not all bad – the updated CPM schedule shows the minimum level of performance needed to stop the situation deteriorating. The problem is that senior managers also need a reliable prediction of when the project can realistically be expected to finish, and CPM cannot provide this. A more realistic (pessimistic) view is obtained by applying the principles of Work Performance Management (WPM) to a CPM schedule, using ‘activity days’ taken from the CPM schedule as the metric.

Our latest article, WPM Solves CPM Optimism, uses a simple CPM schedule to demonstrate the differences between the project completion dates calculated by CPM and by WPM. The value of WPM is in stripping away the optimism bias inherent in CPM scheduling (particularly early in the project), giving management a clear indication of where the project is likely to finish if work continues at the current level of productivity. These predictions are not a statement of fact: change the productivity and you change the outcome! A similar approach can be used to assess projected completion dates based on a simple manual bar chart.
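
The difference between the two forecasts can be sketched in a few lines of arithmetic. This is a simplified illustration only – the numbers are hypothetical, the planned rate of work is assumed to be broadly linear, and the current performance factor is assumed to persist; the article sets out the actual worked example:

```python
# Simplified sketch of CPM optimism vs a WPM-style, productivity-adjusted
# forecast, using 'activity days' as the metric. All figures are hypothetical.

planned_duration = 40      # weeks
time_now = 16              # weeks elapsed
planned_to_date = 320      # activity days planned to be complete by now
achieved_to_date = 240     # activity days actually complete

# CPM assumes all remaining work proceeds exactly as planned, so its forecast
# only moves by the delay already locked onto the critical path (assume 2 weeks).
cpm_forecast = planned_duration + 2

# A WPM-style forecast assumes work continues at the productivity achieved
# to date (assuming a broadly linear planned rate of work).
performance = achieved_to_date / planned_to_date      # 0.75
wpm_forecast = planned_duration / performance         # ~53.3 weeks

print(f"CPM forecast: week {cpm_forecast}")
print(f"WPM-style forecast: week {wpm_forecast:.1f}")
```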

To download the article, and see more on augmenting CPM with WPM to enhance controls information: https://mosaicprojects.com.au/PMKI-SCH-041.php#WPM-CPM

Benefits Management

The publication last year of BS 202002:2023 – Applying benefits management on portfolios, programmes and projects – has prompted an update to our Value and Benefits Realization page, including a link to the new Standard’s home page.

As we know, organizations invest resources in projects to derive benefits and create value.  But those benefits don’t happen by themselves; they need to be managed. BS 202002:2023 is a new British Standard on how to deliver the planned benefits of projects, programmes and portfolios to create value for the organization and its customers.

While the Standard is quite expensive to buy, all of the publications on the Mosaic ‘Value and Benefits’ page are free to download and use. They cover:
– Value and Benefits Overview,
– Defining project success,
– Benefits Management,
– Value Management and Value Engineering, and
– Useful External Web-links & Resources.

See more at: https://mosaicprojects.com.au/PMKI-ORG-055.php  

Commercializing Agile

Agile in its various forms is becoming mainstream, and this means an increasing number of commercial contracts are being delivered by contractors who either choose, or are required, to use an agile methodology to create their contracted deliverables. While this is probably a good thing, the shift in approach can cause a number of problems. The major challenge is managing the legally imposed, contractual requirement to deliver 100% of the designated project deliverables on time. The funds available to the contractor to do this work are defined by the contract price, and failure to deliver the contracted deliverables within the contracted timeframe can lead to significant cost penalties being applied[1].

The requirement to deliver a project as promised in the agreed contract is business-as-usual for most construction and engineering projects and is common across many other industries. While relatively rare, software companies have also been successfully sued for breach of contract when their deliverables did not meet the contracted obligations; some early cases are discussed in Software sales hype and the law, and IT Business Sued for US$300 million+. In short, choosing to use Agile as a project delivery methodology will not change the law of contract, which means organizations using an agile methodology will need to become more commercial and adapt their processes to include:

  1. Developing the realistic time and cost estimates needed to enter into a contract.
  2. Monitoring and controlling the project work to optimize the delivery of the contracted requirements within the contract timeframe.
  3. Enhancing their contract administration to deal with changes, variations, reporting, claims and other contractual requirements and issues.

This post is a start in looking for practical solutions to some of these challenges.

Contract Claim Administration

Two of the core tenets of Agile are welcoming change when it creates additional value for the client, and working with the client to discuss and resolve problems. While these are highly desirable attributes that should be welcomed in any contractual situation, what happens when the relationship breaks down, as it will on occasions?

The simple answer is that every contract is subject to law, and the ultimate solution to a dispute is a trial, after which a judge will decide the outcome based on applying the law to the evidence provided to the court. The process is impartial and focused on delivering justice, but justice is not synonymous with a fair and reasonable outcome.  To obtain a fair and reasonable outcome, evidence is needed that can prove, or disprove each of the propositions being put before the court.

The core elements in dispute in 90% of court cases relating to contract performance are about money and time. The contractor claims the client changed, or did, something (or things) that increased the time and cost of completing the work under the contract; the client denies this and counterclaims that the contractor was late in finishing because it failed to properly manage the work of the contract.    

The traditional approach to resolving these areas of dispute is to obtain expert evidence on the cost of each change, the time needed to implement it, and its effect on the completion date. Determining the cost of a change is not particularly affected by the methodology used to deliver the work: the additional work involved in a change, and its cost, can be determined for an ‘agile’ project in much the same way as for most other projects. The major issues arise in assessing a reasonable delay.

For the last 50+ years, courts have been told by many hundreds of experts that the appropriate way to assess project delay is by using a critical path (CPM) schedule. Critical path theory is based on the assumption that, to deliver a project successfully, there is one best sequence of activities that has to be completed in a pre-defined way to achieve the best result. Consequently, this arrangement of the work can be modelled in a logic network and, based on this model, the effect of any change can be assessed.

Agile approaches the work of a project from a completely different perspective. The approach assumes there is a ‘backlog’ of work to be accomplished, and that the best people to decide what to do next are the project team when they are framing the next sprint or iteration. Ideally, the team making these decisions will have the active participation of a client representative, but this is not always the case. The best sequence of working emerges; it is not predetermined, and therefore a CPM schedule cannot be developed before the work starts.

Assessing and separating the delay caused by a change requested or approved by the client from delays and inefficiencies caused by the contractor is difficult at the best of times. The process becomes even more difficult in projects using an agile approach to the work, but it is essential for assessing time-related claims under a contract.

There are some control tools available in Agile, but diagrams such as a burndown (or burnup) chart are not able to show the effect of a client instructing the team to stop work on a particular feature for several weeks, or adding new elements to the work. The instruction may have no effect (the team simply works on other things), or it may have a major effect. The problem is quantifying the effect to a standard that will be accepted as evidence in court proceedings. CPM has major flaws, but it can be used to show a precise delay as a specific consequence of a change in the logic diagram. Nothing similar seems to have emerged in the Agile domain.

These challenges are discussed in WPM – Schedule Control in Agile and Distributed Projects (and are the focus of ongoing work).

The agile paradigm has a lot to offer, but to become a commercially effective option, the project controls and contractual frameworks will need a major overhaul.  For more on managing agile see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1


[1] Developing and defending contractual claims is discussed in Forensic analysis and reporting (cost & time): https://mosaicprojects.com.au/PMKI-ITC-020.php#Process1

Critical Path Characteristics and Definitions

I’m wondering what is causing the confusion appearing in so many posts lately concerning the definition of the critical path. Is it:

  1. A lack of knowledge?
  2. People being out of date and using superseded definitions?
  3. People not understanding the difference between a characteristic and a definition?

As most people know (or should know), the definition used by the PMI Practice Standard for Scheduling (Third Edition), the International Organization for Standardization (ISO), and most other reputable authorities in their standards is similar to:

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase. 

For more on the development of this definition see: Defining the Critical Path.


To deal with the questions above, in reverse order:

The difference between a characteristic and a definition.

The definition of a phrase or concept (the ‘critical path’ is both) should be a short, concise statement that is always correct. A characteristic is something that indicates the concept may be present.

Everyone of significance has always agreed the critical path is the sequence of activities determining the earliest possible completion of the project (or, if the project has staged completions, a stage or phase). This is the basis of the current, valid definitions. As a direct consequence, in a properly constructed CPM schedule the float on the critical path is likely to be lower than on other paths – but not always. Low or zero float is a characteristic often seen on a critical path, but it is a consequence of the path’s defining feature: being longer than the other paths.

Superseded definitions.

In the 1960s and 70s, most CPM schedules were hand drawn and calculated using a day-number calendar. This meant there was only one calendar, and constraints were uncommon. When there are no constraints and only a single calendar in use, the critical path has zero float! From the 1980s on, most CPM schedules have been developed using software tools, all of which offer the user the option to impose date constraints and use multiple calendars (mainframe scheduling tools generally had these features from the 1960s on).

Using more than one calendar can cause different float values to occur within a single chain of activities; this is discussed in Calendars and the Critical Path.

Date constraints can create positive or negative float (usually negative), depending on the imposed date compared to the calculated date and the type of constraint; this is discussed in Negative Float and the Critical Path.

Consequently, for at least the last 40 years, the definition of a critical path cannot be based on float – float values change depending on other factors, as the sketch below demonstrates.
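
A minimal forward and backward pass over a single chain of activities makes the point (the durations and the imposed date are hypothetical): impose a finish constraint earlier than the calculated early finish and the longest path – still the critical path by definition – carries negative float, not zero float:

```python
# Minimal sketch: a forward/backward pass over a single chain A -> B -> C with
# an imposed finish date. Durations and the constraint are hypothetical.

durations = {"A": 10, "B": 10, "C": 10}   # chain A -> B -> C, 30 days of work
imposed_finish = 25                        # client-imposed completion date

# Forward pass: earliest finish of each activity
early_finish, t = {}, 0
for act, dur in durations.items():
    t += dur
    early_finish[act] = t                  # A:10, B:20, C:30

# Backward pass: latest finish, driven by the constraint rather than day 30
late_finish, t = {}, imposed_finish
for act, dur in reversed(list(durations.items())):
    late_finish[act] = t
    t -= dur

for act in durations:
    print(f"{act}: EF={early_finish[act]:2} LF={late_finish[act]:2} "
          f"TF={late_finish[act] - early_finish[act]}")

# Every activity on the longest (critical) path reports TF = -5, not zero.
```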

Knowledge?

One of the problems with frequently repeated fallacies is that when people do a reference search, they find a viable answer and then use that information, assuming it is correct. This is the way we learn, and it is common across all disciplines.

Academic papers are built on references and, despite the peer review process, can cite false information and continue to spread a falsehood. One classic example is the number of books and papers that still claim Henry Gantt developed the bar chart, despite the fact that bar charts were in use 100 years before Gantt published his books (which make no claim to him having invented the concept); for more on this see: https://mosaicprojects.com.au/PMKI-ZSY-020.php#Barchart. Another common falsehood is that Henry Gantt ‘invented project management’ – his work was focused on improving factory production processes: https://mosaicprojects.com.au/PMKI-ZSY-025.php#Overview

Academics are trained researchers, and they still make mistakes; the rest of us have a bigger challenge! The spread of un-reviewed publications via the internet over the last 20+ years started the problem. Now Generative AI (Gen AI) and large language models (LLMs) are exacerbating it. For most of us it is getting harder and harder to know where the information presented by a person, or in an article, originated. Gen AI is built to translate data into language; it has no ability to determine whether the data it has found comes from a credible source. And as more and more text is produced by the various Gen AI tools, wrong information will be repeated more often, making it more likely that the wrong information will be found and repeated again, and again.

I’m not sure of the solution to this challenge. Gen AI is clearly not skilled in project management practice (even the PMI AI tool); for more discussion on this important topic see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

Reference

One reference that is reliable is Mosaic’s Easy CPM. It incorporates most of what we know about developing and using an effective schedule in any software tool. The book is designed to provide practical guidance to people involved in developing, or using, schedules based on the Critical Path Method (CPM), and to act as a reference and practice guide to enhance the effectiveness of their scheduling practice.

For more see: https://mosaicprojects.com.au/shop-easy-cpm.php 

LLM and Project Management – The Intelligence in AI is limited!

The best known LLM (Large Language Model) is ChatGPT. Its developers acknowledge that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging because, during LLM training, there is no single source of truth (the application is biased towards what it has been taught). The ‘training source’ for ChatGPT is the internet, which contains thousands of incorrect posts on various aspects of project management – GIGO applies.

To overcome this problem, a number of tools based on groomed training materials are in development, some proprietary and in-house, others to be publicly available. Two of these are:

Marvin, a collaboration between Projecting Success and Praxis. Mosaic is supporting this development – watch this space in 2024.

PMI’s Project Infinity, based on the vast library of publications and papers held by PMI.  This tool is still in Beta, but as a PMI member I’ve had access to test the model.

Project Infinity Test (1)

The ‘intelligence’ in LLM applications is the ability to understand and reply to language in a coherent way.  You ask a question, the app understands what you have asked, searches its learning, and formulates an answer you can understand. What seems to be missing is the ability to qualify the information in its systems, particularly when specific facts are required.

My test for Project Infinity was based on asking “What is the critical path?”.  The correct answer to this question based on the PMI Practice Standard for Scheduling (Third Edition) and ISO Standards is something similar to: 

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase.  See: Defining the Critical Path 

As anyone with even a limited knowledge of CPM scheduling knows, ‘Zero Float’[1], or float in general, cannot be used to define the critical path for the following reasons:

  1. Date constraints can cause changes in the float value, usually to create negative float, see: Negative Float and the Critical Path
  2. Multiple calendars can change float values along the critical path, see: Calendars and the Critical Path

A summary of my ‘chat’ with Project Infinity is:

Q1 – what is the critical path?

       ‘The critical path is the longest sequence of activities in a project plan which must be completed on time in order for the project to be completed on schedule. It is determined by identifying the activities that have zero slack or float time,….’

Q2 – Why are you claiming the critical path has zero float when it may have negative float?

        ‘Negative float can occur when an activity’s late finish date is earlier than its early finish date, which means that the activity is taking longer than planned and is delaying the project. However, the critical path always has zero float because any delay in the critical path activities will cause a delay in the overall project timeline. …….  In summary, while negative float can occur in non-critical path activities, the critical path always has zero float because any delay in these activities will cause a delay in the overall project timeline.’

Q3 – so what happens when the finish milestone and the project overall have a constrained finish?

        ‘When the finish milestone and the project overall have a constrained finish, total float may be used to provide an early indication of potential project completion slippage.’
[Note: I’m assuming ‘Total Float’ can be negative; PI did not say so.]

Q4 – Why does your third answer contradict the first answer?

        ‘My third answer does not contradict the first answer. ……’

The full exchange can be seen at Project Infinity Test 1 – Critical Path

Conclusions

This simple test shows that, at the current level of development, Project Infinity has a lot of learning to do:

  • It ignored key PMI documents, in particular the PMI Practice Standard for Scheduling (Third Edition).
  • It failed to recognize a direct contradiction in its answers.

Therefore, while LLM tools can help save time by bringing together information from diverse sources, their training to date is limited to absorbing information from documents; the next stage of development, qualifying and grading the data, may be some way off. So if you do not know the right answer to a question, you cannot rely on an AI tool using an LLM to provide you with a way out.

To make matters worse, accountability in AI is a complex issue. We know AI systems can misstep in numerous ways, which raises the question of who is responsible. This is a complex legal issue and, in the absence of someone else who is demonstrably at fault, you are likely to carry the can!

For more on AI in project management see:  https://mosaicprojects.com.au/PMKI-SCH-033.php#AI


[1] The concept of the critical path having zero total float arose in the 1960s when computer programs were relatively simple and most schedules were manually drawn and calculated. With a single ‘Day Number’ calendar and no constraints, the longest path in a network has zero float. The widespread adoption from the 1980s of scheduling software allowing multiple calendars and constraints invalidated this definition.

The evolution of AI

In our previous blog post, AI is coming to a project near you!, we identified a large number of project management software applications using Artificial Intelligence (AI) and noted the rapid spread of the capability. But what exactly is AI? This post offers a brief overview of the concept.

AI is not as new as some people imagine. Some of the mathematics underpinning AI can be traced back to the 18th century, and many of the fundamental concepts were developed in the 20th, but until recently there was very limited practical use of AI. Making widespread practical use of AI required computers with sufficient processing capability to handle large amounts of data quickly. Each of the developments outlined below was enabled by better processors and increased data storage capabilities.

Types of AI

The modern concept of intelligent processing is more than 50 years old, but the way a computer application works depends on the design of the application.  Very broadly:

Decision tables have been in software since the 1960s. A decision table applies a cascading set of decisions to a limited set of data to arrive at a result. The ‘table’ is hard-wired into the code and does not change. Many resource levelling algorithms are based on decision tables to decide which resources get allocated to which activities on each day, along the lines sketched below.
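
As a rough sketch of the idea (the priority rules and field names below are hypothetical, not those of any particular scheduling tool), a decision table of this kind can be pictured as a fixed, ordered cascade of tie-break rules applied to the activities competing for resources on a given day:

```python
# Illustrative sketch of a hard-wired decision table of the kind used by some
# resource-levelling algorithms: a fixed, ordered cascade of rules that never
# changes at run time. The priority rules and field names are hypothetical.

def allocation_priority(activity: dict) -> tuple:
    """Return a sort key; lower values are allocated resources first."""
    return (
        activity["total_float"],      # rule 1: least float first
        -activity["duration"],        # rule 2: then longest duration
        activity["activity_id"],      # rule 3: then lowest activity ID
    )

todays_candidates = [
    {"activity_id": "A120", "total_float": 5, "duration": 10},
    {"activity_id": "A080", "total_float": 0, "duration": 4},
    {"activity_id": "A095", "total_float": 0, "duration": 9},
]

for act in sorted(todays_candidates, key=allocation_priority):
    print(act["activity_id"])   # A095, A080, A120 - allocated in that order
```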

Expert systems, known as rule-based systems in the 1960s, use explicitly crafted rules to generate responses or data sets based on an initial question. These systems were the basis of many automated Chatbots and help systems. The system’s rules are ‘hard-wired’ and do not change without external intervention.

Data mining was developed in the 1990s. The application uses generalized instructions to look at large volumes of data to discover previously unknown properties. ‘Generalized’ means the data being examined does not need to be labelled or predefined: the application works out a suitable structure and then draws classes of information from within the data set. These systems can be interactive, but are not self-learning. Extracting knowledge from ‘big data’ supports Business Intelligence and other business and marketing improvement initiatives.

Machine learning (ML). Basic ML is similar to data mining. It is concerned with the development and study of statistical algorithms that can generalize effectively to perform tasks without explicit instructions. ML focuses on prediction and recognition, based on properties the application has learned from training data. The basic functions of ML were defined in the early 2000s, and the concept continues to evolve and develop. The basic ML approach can be seen in a range of project management tools where the application recommends durations, lists likely risks, or performs other predictive assessments on your current project based on data from previous projects (see the sketch below).
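
A rough sketch of the kind of recommendation described, using hypothetical data and features and scikit-learn’s LinearRegression purely for illustration – real tools use far richer models and training data:

```python
# Minimal sketch: a model trained on (hypothetical) data from past projects
# recommends a duration for a new activity. Features, data, and model choice
# are illustrative only.

from sklearn.linear_model import LinearRegression

# Past activities: [scope_size, crew_size] -> actual duration in days
X_train = [[100, 4], [250, 6], [400, 6], [150, 3], [300, 5]]
y_train = [12, 25, 45, 20, 33]

model = LinearRegression().fit(X_train, y_train)

new_activity = [[200, 4]]                     # scope 200 units, crew of 4
predicted = model.predict(new_activity)[0]
print(f"Recommended duration: {predicted:.0f} days")
```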

Algorithmic Decision Making, a subset of ‘expert systems’, focuses on using conventional machine learning and statistical techniques such as ordinary least squares, logistic regression, and decision trees to automate traditional human-based decision-making processes. Research suggests well-designed algorithms are less biased and more accurate than the humans they replace, improving predictions and decisions, but care is needed: they can also perpetuate blind spots and biases, and can be built to be fundamentally unfair.

Generative AI (Gen AI) extends the capability of ML. Gen AI uses generative artificial neural networks and ‘deep learning’ to deliver enhanced performance. Gen AI has been applied to large language models (LLM), computer vision, robotics, speech recognition, email filtering, agriculture, medicine, and many other fields. Each branch of development takes the basic principles of Gen AI and adapts them to the specific needs of the researchers. Latest trends are linking different strands of Gen AI to create new things such as generating pictures from verbal descriptions, and linking Gen AI to the IoT (Internet of Things) and additive manufacturing functions to produce computer designed ‘stuff’.

Large language models (LLMs) are the branch of generative AI with the most direct relevance to project management. LLMs use deep learning algorithms to perform a variety of natural language processing (NLP) tasks. They are trained using massive datasets, which enables them to recognize, translate, predict, or generate text and other content. LLM applications must be pre-trained and then fine-tuned so that they can classify text, answer questions, summarize documents, and generate text, sound, or images. The challenge with LLMs is the training material: the system only knows what it has been taught. This branch of Gen AI burst into prominence with the development of ChatGPT. Its developer, OpenAI (a research company), launched ChatGPT on November 30, 2022 – a year later, ChatGPT has worldwide attention.

LLMs underpin most of today’s advanced AI applications and can generate content across multiple types of media, including text, graphics, and video. While early implementations have had issues with accuracy and bias, the tools are improving rapidly.

Progress to date indicates that the inherent capabilities of Gen AI will fundamentally change enterprise technology, and how businesses operate. Rather than simply requiring technical skills, employees will also need critical thinking, people skills, and a good understanding of ethics and governance processes.

In the four months from January to April 2023, the percentage of employees in Australia using Gen AI grew from 10% to 35%[1]. This rapid growth in use raises concerns around safety, privacy, and security, but businesses that do not explore the use of Gen AI in their organisation or industry risk being left behind.

The technological world is becoming very closely integrated:

Source: ACS Australia’s Digital Pulse | A new approach to building Australia’s technology skills.

For more on the application of AI to project management software see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI

Our next post will look at the use of LLM in project management.


[1] Australian Computer Society research 2023. Australia’s Digital Pulse | A new approach to building Australia’s technology skills

Is Fishermans Bend heading for the same public transport disaster as Docklands?

Decades after Docklands was built, getting into and out of the commercial side during rush hour is difficult, and getting on a tram is almost impossible. Similarly, there’s still no direct connection from the residential areas on the North side of the harbour to the commercial areas on the South. For public transport users the only way out is into the city crush.

Now the design for Fishermans Bend is focused on making the existing crush worse. The only tram route out of the ‘employment zone’ (dotted blue lines) passes through Docklands and into the CBD. Add to this the fact that Pakenham, Cranbourne, and Sunbury trains won’t even go through Southern Cross once the Metro Tunnel is open, and that the ‘orange’ Metro 2 underground rail line is 40+ years away, and a rethink is needed.

The overlooked fact is that 80% of the tram tracks already exist for a direct link between the Fishermans Bend ‘employment zone’ and the new ANZAC Station at the Domain – follow the yellow brick road. Fill in the missing links and everyone wanting to travel on the Pakenham or Cranbourne trains, or to the south-east using the St Kilda Rd trams, can bypass the city crush and save time.

Connectivity from Docklands to anywhere but the CBD was a disaster for the first couple of decades and getting into the CBD was not easy.  Lots of improvement projects later it’s still far from good.  Why is the government making the same mistake in Fishermans Bend? Most people working in the new ‘employment zone’ will not be living in the CBD – so why is the planning focused on cramming everyone through the already overcrowded CBD?

This is a Melbourne grumble……  For more on the Fishermans Bend project see: https://www.fishermansbend.vic.gov.au/

AI is coming to a project near you!

Like it or not, Artificial Intelligence (AI) is coming to a project near you. As well as adaptations of generic tools such as ChatGPT, many new tools are being released and established tools upgraded to embed AI in various ways. But care is needed – “Garbage in, Garbage out” can still diminish the value of AI.

Generative AI tools that create content based on a ‘large language model’ need to be trained on quality resources rather than the mass of misinformation floating around the internet. This is not hard to do and can generate impressive results (but be careful of copyright). Most project management tools with embedded AI are built around machine learning (ML) and learn from the data you generate; the more advanced tools then apply AI to create insights.

These enhanced tools bring almost limitless processing capability to give meaning to data and help make crucial decisions to achieve project strategies. They can take over various technical tasks, allowing project managers to focus on more important work, and they improve estimating and risk assessment processes by providing insights from previous projects to enable managers to do a better job. These systems can also assist by keeping track of the project and monitoring key metrics such as budgets, milestones, and other resources.

AI is still improving, and so are AI project management tools. All of this creates numerous benefits for the project management process. Here are some core benefits you can expect right away:

  • Better Project Estimations 
  • Improved Scheduling and Planning
  • More reliable Roadmaps and Budgets
  • More Predictability

There are a surprisingly large number of tools embedding ML and/or AI, too many to list here!  What we have done is augment the Mosaic PM Software and Tools listing to highlight tools with some ML or AI capability – look for the blue – AI – in the categorized listings at:
https://mosaicprojects.com.au/PMKI-SCH-030.php 

If you know of any additional tools missing from the list (or tools in the list that should be flagged ‘AI’) let me know and I will update the list.

For more on the application of AI to project management software see: https://mosaicprojects.com.au/PMKI-SCH-033.php#AI

How WPM Works

Work Performance Management (WPM) is a methodology developed by Mosaic Project Services Pty Ltd to offer a simple, robust solution to the challenge of providing rigorous project controls information on projects that cannot (or do not) use CPM and/or EVM. It works by setting an expected rate of working using an appropriate metric, then measuring the actual work achieved to date. Based on this data, WPM can assess how far ahead of or behind plan the work currently is, and use this information to calculate the likely project completion date and the VAC.

The basis of the calculations used in WPM is the same as that used in Earned Schedule (ES); however, WPM is much simpler to set up and use. The only two requirements to implement WPM are:

  • A consistent metric to measure the work planned and accomplished, and
  • A simple but robust assessment of when the work was planned to be done.

Our latest article, How WPM Works, explains in detail the processes and calculations used in WPM, and the outputs produced.

Understanding the current status and projected completion is invaluable management information for Agile and other projects where CPM schedules are not used, and even where a project has a good CPM schedule in place this additional information is useful. By plotting the trends for both the current variance (WV) and the VAC, management also knows how the project is tracking overall; a rough sketch of the calculation is shown below.
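
As a rough, hypothetical illustration of these outputs – a simplified, earned-schedule-style calculation using made-up numbers, not the exact formulas in the article:

```python
# Rough sketch of a WPM-style calculation. The planned curve and progress
# figures are hypothetical; the 'How WPM Works' article sets out the actual
# process and formulas.

# Cumulative work planned to be complete at the end of each week, in any
# consistent metric (story points, activity days, documents, etc.)
planned = [0, 40, 90, 150, 220, 300, 380, 450, 500]   # weeks 0..8, PD = 8
time_now = 5                                           # end of week 5
achieved = 220                                         # work actually complete

# Find the time at which the plan expected 'achieved' to be complete
for wk in range(1, len(planned)):
    if planned[wk] >= achieved:
        earned_time = wk - 1 + (achieved - planned[wk - 1]) / (planned[wk] - planned[wk - 1])
        break

wv = earned_time - time_now                 # current variance: -1.0 week
planned_duration = len(planned) - 1
expected_duration = planned_duration * time_now / earned_time
vac = planned_duration - expected_duration  # variance at completion: -2 weeks

print(f"WV  = {wv:+.1f} weeks (work is behind plan)")
print(f"VAC = {vac:+.1f} weeks (projected overrun at completion)")
```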

Download How WPM Works: https://mosaicprojects.com.au/Mag_Articles/AA038_-_How_WPM_Works.pdf

For more on WPM see: https://mosaicprojects.com.au/PMKI-SCH-041.php#Overview