Lies, Damned Lies and Statistics*

While the source of this quotation is questionable, how you present data to management can really change the message without distorting the underlying data.  A peer-reviewed study of COVID-19 deaths among people over 50 in NSW, Australia, published in PLOS One[1], shows:

Anti-Vax: More vaccinated people died than unvaccinated.  A semi-true statement, but it omits the fact that more than 95% of the population had at least one vaccination. 

Anti-Vax: There is no real need to get vaccinated, the death rate is very low at 0.38%.  True, but this average lets the three vaccinated groups offset the one unvaccinated group.

Pro-Vax: You are nearly 10 times more likely to die of COVID-19 if you are not vaccinated, compared to if you are fully vaccinated. Also true!

What is the real situation?  The chart at the top of this post shows a dramatic difference between vaccinated and unvaccinated people. The unvaccinated had a 1.03% chance of dying in the study year, much higher than the 0.093% probability for the fully vaccinated.  By way of comparison, both are far higher than the probability of dying in a vehicle accident in Australia, which is 0.00445% per year.
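The comparisons above are simple rate ratios. As a quick check, a minimal sketch using the headline annual rates quoted in this post:

```python
# Headline annual death-rate figures quoted above (percent per year)
RATES = {
    "unvaccinated": 1.03,
    "fully_vaccinated": 0.093,
    "vehicle_accident": 0.00445,
}

def relative_risk(rate_a: float, rate_b: float) -> float:
    """Ratio of two incidence rates; the percentage units cancel."""
    return rate_a / rate_b

rr = relative_risk(RATES["unvaccinated"], RATES["fully_vaccinated"])
print(f"Unvaccinated vs fully vaccinated: {rr:.1f}x the risk")
```

The ratio works out at about 11, consistent with the "nearly 10 times" claim, and shows why the ratio is far more informative than the blended 0.38% average.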

The population of NSW is over 8 million, so from a public health perspective the reduction in demand for hospital beds between vaccinated and unvaccinated is massive – literally thousands of hospital beds were not required, particularly considering the numbers of people who get sick and live.  Vaccination saved the health system from collapse.

From a personal perspective the decision is more nuanced. The risk of COVID is relatively low and has reduced significantly since this study, but this low risk can be reduced by a factor of 10 by being vaccinated; offset by a one-in-several-million chance of an adverse reaction to the vaccine.  We did the numbers and decided to be fully vaccinated. Others decided to accept the relatively low risk of COVID.

But what does this tell you about project performance data?  Getting accurate data is one thing; how you present this information is another.  Far too many controls people stop at the first point – data – and do not think through what message they need to communicate to management. The COVID data supports all three messages above, but the one that really matters is in the graph and the last point. Effectively communicating controls information is a skill in itself, see: Reporting & Communicating Controls Information

*Lies, Damned Lies and Statistics

Neither of the usual attributions appears to be correct according to the University of York: https://www.york.ac.uk/depts/maths/histstat/lies.htm


[1] The full study is at: https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0299844#pone-0299844-g004

Assessing Delays in Agile & Distributed Projects

Published in the April edition of PM World Journal, our latest paper, Assessing Delays in Agile & Distributed Projects, looks at the challenge of assessing delay and disruption in projects where a CPM schedule is not being used. 

For the last 50 or more years, the internationally recognized approaches to assessing delay and disruption have been based on the forensic assessment of a CPM schedule. However, the methods detailed in RP 29R-03 Forensic Schedule Analysis and the SCL Delay and Disruption Protocol do not offer a viable option for assessing delay and disruption in a wide range of projects including:
– Projects managed using an agile or iterative approach to development.
– Distributed projects where there is no particular requirement for large parts of the work to be completed in any particular order.
– Projects managed using Lean Construction and similar approaches where the forward planning is iterative and short term.

With Agile becoming mainstream, the number of commercial contracts requiring the delivery of a defined output, within specified time and cost parameters, using management methodologies that do not support CPM scheduling is increasing. But the law of contract will not change. If the contractor fails to comply with the contract requirements to deliver the defined scope within the required time, the contractor will be in breach of contract and liable to pay damages to the client for that breach.  A number of IT companies have already been successfully sued for failing to comply with their contractual obligations; the risk is real. 

One way to minimize the contractor’s exposure to damages is to claim extensions of time to the contract for delay events, particularly delays caused by the client.  What has been missing in the literature to this point is a set of processes for calculating the delay in the absence of a viable CPM schedule. The good news is the law was quite capable of sorting out these questions before CPM was invented; what’s missing is an approach that can be used to prove a delay in agile or distributed projects.

This paper defines a set of delay assessment methods that will provide a robust and defensible assessment of delay in this class of project where there simply is no viable CPM schedule. The effect of any intervening event is considered in terms of the delay and disruption caused by the loss of resource efficiency, rather than its effect on a predetermined, arbitrary, sequence of activities.
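The resource-efficiency idea can be made concrete with a rough illustration (our own simplified sketch, not the method defined in the paper): if an intervening event cuts a crew's productivity, the extra time needed to recover the lost output can be estimated directly, without any activity network.

```python
def delay_from_efficiency_loss(
    remaining_units: float,
    planned_rate: float,      # units per day at planned efficiency
    actual_rate: float,       # units per day achieved during the disruption
    disrupted_days: float,    # length of the disrupted period
) -> float:
    """Extra days needed because the disrupted period produced less than planned.

    Illustrative only: a real assessment must also consider concurrency,
    mitigation, and who caused the event.
    """
    planned_output = planned_rate * disrupted_days
    actual_output = actual_rate * disrupted_days
    shortfall = max(0.0, min(planned_output, remaining_units) - actual_output)
    return shortfall / planned_rate  # days to recover the shortfall at planned rate

# Example: 100 units left, planned 10/day, a disruption halves output for 6 days
print(delay_from_efficiency_loss(100, 10, 5, 6))  # → 3.0 days of delay
```

The point of the sketch is that the calculation depends only on volumes of work and production rates, not on a predetermined sequence of activities.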

Download Assessing Delays in Agile & Distributed Projects and supporting materials from: https://mosaicprojects.com.au/PMKI-ITC-020.php#ADD

Understanding the Iron Triangle and Projects

The concept of the iron triangle as a framework for managing projects has long passed its use-by date; almost everyone, including PMI, recognizes that the challenges faced by every project manager are multi-faceted and multi-dimensional – three elements are not enough. But dig back into the original concepts behind the triangle, and you do uncover a useful framework for defining a project, and separating project management from general management.

The Iron Triangle

The origin of the project management triangle is attributed to Dr. Martin Barnes. In 1969[1], he developed the triangle as part of his course ‘Time and Money in Contract Control’ and labelled three tensions that needed to be balanced against each other in a project: time, cost, and output (the correct scope at the correct quality). The invention of the iron triangle is discussed in more depth in The Origins of Modern Project Management.

The concept of the triangle was widely accepted, and over the years different versions emerged:

  • The time and cost components remained unchanged, but the ‘output’ became variously scope or quality, and later scope became one of the three components with quality in the middle of the triangle (or vice versa). These changes are semantic:
    • you have not delivered scope unless the scope complies with all contractual obligations including quality requirements
    • achieving quality requires delivering 100% of what is required by the client.

  • The shift from tensions to constraints changes the concept completely. A constraint is something that cannot be changed, or requires permission to change. Tensions vary based on external influences; the three tensions can work together or against each other.  
     
  • The introduction of iron!  It seems likely the ‘iron’ in the iron triangle is based on the concept of the iron cage from the English translation (1930) of Max Weber’s 1905 book The Protestant Ethic and the Spirit of Capitalism. The iron cage traps individuals in systems based purely on goal efficiency, rational calculation, and control[2].

So, while the concept of the iron triangle and/or triple constraint has been consigned to history, the original concept of the triangle, as a balance between the three elements that are always present in a project, still has value.

Defining a project

Understanding precisely what work is a project, and what is operational (or some other form of working), is becoming increasingly important as various methodologies such as Lean and Agile span the operations and project domains. 

Some of the parameters used to define or categorize projects include:

There are many other ways to categorize projects, some of which are discussed in the papers at: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class. But these classifications do not really provide a concise definition of a project. And, while there are many different definitions, we consider the best definition of a project to be: 

Project:  A temporary organization established to accomplish an objective, under the leadership of a person (or people) nominated to fulfil the role of project manager[3].

The two key elements being:

  1. The project is temporary; the project delivery team or organization closes, and its people are reallocated or terminated, when the objective is achieved, and

  2. The project is established to accomplish an objective for a client which may be internal or external to the delivery organization.

The concept of a project being temporary is straightforward (even though it is often ignored). Departments that are set up and funded to maintain and enhance plant, equipment and/or software systems on an ongoing basis are not projects, and neither is most of their work. We discussed this several years back in De-Projectizing IT Maintenance, and the concept seems to be becoming mainstream with the concept of ‘flow’ being introduced to Disciplined Agile.

The value of the project management triangle is identifying the three mandatory elements needed to describe the client’s objective. There are usually a number of additional elements, but if any of these three is not present, the work is not a project:

  1. The expected outcome is understood. The project is commissioned to deliver a change to the status quo. This may be something new, altered, and/or removed; and the outcome may be tangible, intangible or a combination.  The outcome does not need to be precisely defined to start a project, and may evolve as the project progresses, but the key element is there is an understood objective the project is working to achieve.  
      
  2. There is an anticipated completion time. This may be fixed by a contract completion date, or a softer target with some flexibility.  However, if there are no limitations on when the work is expected to finish (or it is assumed to be ongoing), you are on a journey, not working on a project. 
     
  3. There is an anticipated budget to accomplish the work. Again, the budget may be fixed by a contract, or a softer target with some flexibility.  The budget is the client view of the amount they expect to pay to receive the objective. The actual cost of accomplishing the work may be higher or lower and who benefits or pays depends on the contractual arrangements. 

Conclusion

Understanding the difference between project work and non-project work is important.  The overheads of project management are justified when managing the challenges of delivering a project, including dealing with scope creep, cost overruns, schedule slippage, and change in general. The mission of the project team is to deliver the agreed objective as efficiently as possible.

The objective of an operational department is to maintain and enhance the organizational assets under its control. This typically needs different approaches and a focus on a wider range of outcomes.  Many techniques are common to both operational and project work, including various agile methodologies and lean, and many management traits such as agility are desirable across the spectrum.  The difference is understanding the overarching management objectives, and tailoring processes to suit.  

For more on project definition and classification see:
https://mosaicprojects.com.au/PMKI-ORG-035.php#Overview


[1] We published and widely circulated this claim after a meeting with Dr. Barnes in 2005 at his home in Cambridge. So far no one has suggested an alternative origin.  

[2] For more on the work of Max Weber, see The Origins of Modern Management: https://mosaicprojects.com.au/PDF_Papers/P050_Origins_of_Modern_Management.pdf

[3] The basis of this definition is described in Project Fact or fiction: https://mosaicprojects.com.au/PDF_Papers/P007_Project_Fact.pdf 

Waterfall is Dead

The PMI 2024 Pulse of the Profession has introduced a framework for categorizing projects based on the management approach being used: Predictive – Hybrid – Agile.  If generally adopted, this framework will at long last kill off the notion of waterfall as a project delivery methodology.

As shown in our historical research The History of Agile, Lean, and Allied Concepts, the idea of waterfall as a project delivery methodology was a mistake, and its value as a software development approach was limited.

The PMI framework has some problems, but the predictive project delivery paradigm is described as focused on schedule, scope, and budget. These projects tend to use a phase-based approach and are plan driven.  This describes most hard projects and many soft projects that are not using an unconstrained agile approach.

For a detailed review of the PMI 2024 Pulse of the Profession report, and how the classification system works see How should the different types of project management be described?, download from: https://mosaicprojects.com.au/Mag_Articles/AA026_How_should_different_types_of_PM_be_described.pdf

For more on project classification see: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class

WPM for Lean & Distributed Projects

The core concept underlying the Critical Path Method (CPM) is that there is one best way to undertake the work of the project, and that this can be accurately modelled in the CPM schedule. This premise does not hold for either distributed projects or projects applying Lean Construction management. These two types of projects differ – lean is a management choice, whereas distribution is a physical fact:
–  Distributed projects are ones where the physical distribution of the elements to be constructed means significant amounts of the work can be done in any sequence and changing the sequence when needed is relatively easy.
–  Lean construction is a project delivery process that uses Lean methods to maximize stakeholder value and reduce waste by emphasizing collaboration between everyone involved in a project. To achieve this the work is planned and re-planned as needed by the project team focusing on optimising production.

In both cases, the flexibility in the way the detailed work is performed and the relative ease with which the sequence can be changed means CPM is ineffective as a predictor of status and completion.

Our latest article WPM for Lean & Distributed Projects looks at how Work Performance Management (WPM) can be used to assess both the current status and the projected completion for these types of project, regardless of the number of sequence changes made to the overall plan.
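The underlying idea can be illustrated with a simplified, sequence-independent forecast (our sketch only; the article defines the actual WPM calculations):

```python
def forecast_completion_days(
    total_work: float,       # total units of work in the project
    work_done: float,        # units completed to date
    elapsed_days: float,     # time taken to complete work_done
) -> float:
    """Forecast total duration assuming the achieved production rate continues.

    Sequence-independent: only volumes of work matter, so re-sequencing the
    remaining work does not invalidate the forecast.
    """
    if work_done <= 0:
        raise ValueError("need some completed work to establish a rate")
    rate = work_done / elapsed_days   # achieved units per day
    return total_work / rate          # forecast total days at that rate

# Example: 1200 units in total, 300 done in 50 days → forecast 200 days overall
print(forecast_completion_days(1200, 300, 50))  # → 200.0
```

Because the forecast is driven by work performed rather than activity logic, it remains valid however many times the detailed sequence is changed.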

Download WPM for Lean & Distributed Projects from: https://mosaicprojects.com.au/PMKI-SCH-041.php#WPM-Dist

See more on WPM as a valuable tool to add to your project controls system: https://mosaicprojects.com.au/PMKI-SCH-041.php#Overview

Ethics and Governance in Action


The best governed organizations will have ethical failures, even criminal activities, occurring from time to time. When an organization employs thousands of people there will always be some who make mistakes or choose to do the wrong thing.  The difference between a well governed organization with a strong ethical framework and the others is how they deal with the issues.

The Bad

Over the last few months there has been a lot of commentary on major ethical failures by some of the ‘big 4’ accountancy firms (see: The major news story everyone missed: KPMG hit with record fine for their role in the Carillion Collapse), with a common theme being attempts by the partners running these organizations to minimize their responsibility and deflect blame. As a consequence, there have been record fines imposed on KPMG and massive long-term reputational damage caused to PWC by the Australian Tax Office scandal.

The Good

The contrast with the way the Jacobs Group (Australia) Pty Ltd (Jacobs Group) has managed an equally damaging occurrence could not be starker! Jacobs Group pleaded guilty to three counts of conspiring to cause bribes to be offered to foreign public officials, contrary to provisions of the Criminal Code Act 1995 (Cth), but the exemplary way this issue has been managed is an example for all.

Offering bribes to foreign public officials has been a criminal offence in Australia since 1995, and the Crimes Legislation Amendment (Combatting Foreign Bribery) Bill 2023 has just passed into law significantly increasing penalties.

Despite this, between 2000 and 2012, Sinclair Knight Merz (SKM) was involved in two conspiracies in the Philippines and Vietnam. Both conspiracies involved employees of SKM’s overseas development assistance businesses (the SODA business unit) paying bribes to foreign public officials in order to facilitate the awarding of public infrastructure project contracts to SKM. SKM employees embarked on a complex scheme to conceal the bribes by making payments to third party companies, and receiving fake invoices for services which were not in fact rendered. The conduct was known to and approved by senior persons at SKM, although concealed from the company more widely.

Jacobs Group acquired SKM in 2013, after the conduct had ceased. During the vendor due diligence processes, the conduct came to the attention of persons outside those involved in the offending, and the company’s external lawyers.

Despite the lawyers’ findings being subject to legal privilege, and the very remote possibility of the Australian authorities discovering the crime, the non-conflicted directors unanimously voted to self-report the findings to the Australian Federal Police (AFP), to waive legal privilege in the draft report, and to make it available to the AFP. The company also reported the findings of its investigation to a number of other authorities, including the World Bank, Asian Development Bank, AusAid, and ASIC.

The company and a number of individuals were charged in 2018, and Jacobs pleaded guilty to three counts of conspiring to cause bribes to be offered to foreign public officials. The matter only came to our attention because of a recent High Court ruling dealing with technical issues around the calculation of the fine to be paid by Jacobs.

When Justice Adamson in the New South Wales Supreme Court sentenced the company on 9 June 2021, she found that while each of the offences committed fell within the mid-range of objective seriousness, this was mitigated by the fact that the company had self-reported the offending to authorities, and that the self-reporting was motivated by remorse and contrition rather than fear of discovery. The sentencing judge also found that the conduct was not widespread, and effectively limited to the SODA business unit. She accepted evidence from the AFP that it was unlikely to have become aware of the conduct absent the company’s self-reporting, and that the company’s post offence conduct was “best practice” and “of the highest quality”.

Based on these findings the amount of the fine to be paid by Jacobs is likely to be in the region of $3 million – a massive discount from the potential maximum that, based on the High Court decision, is likely to exceed $32 million.

Lessons on Governance and Ethics

The approach taken by Jacobs Group, following the identification of potential criminal conduct, is a useful guide as to how an ethical organization works:

  1. The prompt retention of independent external lawyers to investigate suspected instances of criminal misconduct.
  2. The decisions of the board of directors to self-report the conduct to authorities and provide ongoing assistance and cooperation to law enforcement and prosecutorial authorities, notwithstanding the risk of criminal sanction.
  3. Committing to remediation steps to address the conduct (and seeking to prevent any repeat of it), including by overhauling relevant policies and procedures and making appropriate operational changes including:
  • suspending and then terminating relevant individual employees who had participated in the conduct;
  • operational changes to management and oversight of the SODA business unit that had been involved in the conduct, and changing approval processes for all payments by that unit;
  • introducing a new Code of Conduct which explicitly prohibited the offering of inducements to public officials;
  • introducing a requirement for the completion of a bribery and corruption risk assessment before committing to new projects;
  • upgrading various internal policies, including the company’s whistleblower, donations and gifts and entertainment policies. It also introduced new policies which discouraged the use of agents, and required the screening of all new suppliers and sub-consultants for bribery and corruption risk. The company also engaged an independent monitor to review the changes made to its policies;
  • updating and expanding existing bribery and corruption training programs for staff; and
  • modifying internal audit practices to more closely scrutinize non-financial risks, such as bribery and corruption.

One definition of ethical behaviour is doing the right thing when no one is looking. The contrast between Jacobs and KPMG’s outcomes is a lesson worth remembering.

For more on governance and organizational ethics see: https://mosaicprojects.com.au/PMKI-ORG-010.php#Overview

White Constructions v PBS Holdings Revisited

White Constructions Pty Ltd v PBS Holdings Pty Ltd [2019] NSWSC 1166, involved a claim for delay and costs arising out of a contract to design a sewerage system for a subdivision and submit it for approval. The alleged breach was the failure to create and submit a sewer design acceptable to the approval authority which had the effect of delaying completion of the subdivision, giving rise to a claim for damages by White.

White and PBS both appointed experts to undertake a schedule analysis. They agreed an ‘as-built’ program of the works, but disagreed on almost everything else, including the delay analysis method to use, the correct application of the methods, and the extent of the overall project delay caused by the delays in approving the sewer design.

The Judge found:

[Clause 18]      Plainly, both experts are adept at their art. But both cannot be right. It is not inevitable that one of them is right.
[Note: This approach is consistent with the UK court decision of Akenhead J in Walter Lilly & Company Ltd v Mckay [2012] EWHC 1773 (TCC) at [377], “the court is not compelled to choose only between the rival approaches and analyses of the experts. Ultimately it must be for the court to decide as a matter of fact what delayed the works and for how long”. This precedent has been followed on a number of occasions[1].]

[Clause 22]      The expert reports are complex. To the unschooled, they are impenetrable. It was apparent to me that I would need significant assistance to be put in a position to critically evaluate their opinions and conclusions.

[Clause 25]      Under UCPR r 31.54, the Court obtained the assistance of Mr Ian McIntyre (on whose appointment the parties agreed).

[Clause 137]   The major components of the works were:
       • earthworks,
       • roadworks and kerbing,
       • sewerage,
       • electrical and National Broadband Network (NBN) installation,
       • footpaths, and
       • landscaping.

[Clause 138]   The electrical and NBN installation was contracted to and carried out by an organisation called Transelect. Landscaping was contracted to RK Evans Landscaping Pty Ltd. The as-built program is not in dispute.
[Note: the rest of the work was undertaken by other contractors]

[Clause 184]   White bears the onus of establishing that it suffered loss and the quantum of it.

[Clause 185]   White’s damages are based on delay to the whole project, said to be attributable to the late (underbore) sewer design. This is not the type of subject upon which precise evidence cannot be adduced. [Therefore] It is not a subject which involves the Court having to make an estimation or engage in some degree of guesswork.

[Clause 188]   The descriptions of the methods adopted by Shahady and Senogles respectively are evidently derived from the publication of the United Kingdom Society of Construction Law, the Delay and Disruption Protocol….

[Clause 191]   Mr McIntyre’s opinion, upon which I propose to act, is that for the purpose of any particular case, the fact that a method appears in the Protocol does not give it any standing, and the fact that a method, which is otherwise logical or rational, but does not appear in the Protocol, does not deny it standing.
[Note: this is the same wording as an express statement contained in the Delay and Disruption Protocol]

[Clause 195]   Mr McIntyre’s opinion, upon which I propose to act, is that neither method [used by the parties experts] is appropriate to be adopted in this case.

[Clause 196]   Mr McIntyre’s opinion, upon which I propose to act, is that close consideration and examination of the actual evidence of what was happening on the ground will reveal if the delay in approving the sewerage design actually played a role in delaying the project and, if so, how and by how much. In effect, he advised that the Court should apply the common law common sense approach to causation referred to by the High Court in March v E & MH Stramare Pty Ltd (1991) 171 CLR 506.

[Clause 197]   The Court is concerned with common law notions of causation. The only appropriate method is to determine the matter by paying close attention to the facts, and assessing whether White has proved, on the probabilities, that delay in the underboring solution delayed the project as a whole and, if so, by how much.

[Clause 198]   This requires it to establish that:
• the whole project would have been completed by 15 July 2016,
• the final sewer approval delay delayed sewer works,
• the sewer works delay prevented non-sewer works from otherwise proceeding, that is, that the programme could not reasonably have been varied to accommodate the consequences of late approval, and
• other works could not have been done to fill downtimes so as to save time later.

[Clause 199]   ……… White has failed to discharge this burden.

Summary

The factors required to prove a delay outlined by the Judge at Clause 198 can be generalised as follows:

  1. The completion date for the project before the delay event occurred has to be known with some certainty.
  2. The delay event has to be shown to cause a delay which flowed through to extend the overall project completion date.
  3. There were not reasonable alternative ways of working that could mitigate the effect of the delay on project completion.

More significantly, none of these steps needs a CPM schedule.  The project status and the effect of the disruption on project completion can be assessed based on its effect on the productivity of key resources. This is discussed in Assessing Delays in Agile & Distributed Projects: https://mosaicprojects.com.au/PDF_Papers/P215_Assessing_Delays_In_Agile_+_Distributed_Projects.pdf   


[1]     This approach by the courts is discussed in Delivering Expert Evidence is Becoming Harder: https://mosaicprojects.com.au/Mag_Articles/AA028_Delivering_Expert_Evidence.pdf

The Artificial Intelligence Ecosystem

We have posted a number of times discussing aspects of Artificial Intelligence (AI) in project management, but what exactly is AI?  This post looks at the components in the AI ecosystem and briefly outlines what the various terms mean.

𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲: a range of computer algorithms and functions that enable computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Automatic Programming: is a technology that enables computers to generate code or write programs with minimal human intervention.

Knowledge Representation: is concerned with representing information about the real world in a way that a computer can understand, so it can utilize this knowledge and behave intelligently.

Expert System: is a computer system emulating the decision-making ability of a human expert. A system typically includes: a knowledge base, an inference engine that applies logical rules to the knowledge base to deduce new information, an explanation facility, a knowledge acquisition facility, and a user interface.

Planning and Scheduling: an automated process that achieves the realization of strategies or action sequences that are complex and must be discovered and optimized in multidimensional space, typically for execution by intelligent agents, autonomous robots, and unmanned vehicles.

Speech Recognition: the ability of devices to respond to spoken commands. Speech recognition enables hands-free control of various devices, provides input to automatic translation, and creates print-ready dictation.

Intelligent Robotics: robots that function as intelligent machines and can be programmed to take actions or make choices based on input from sensors.

Visual Perception: enables machines to derive information from, and understand, images and visual data in a way similar to humans.

Natural Language Processing (NLP): gives computers the ability to understand text and spoken words in much the same way human beings can.

Problem Solving & Search Strategies: Involves the use of algorithms to find solutions to complex problems by exploring possible paths and evaluating the outcomes. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
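As a concrete illustration of a search strategy, a minimal breadth-first search over a toy state space (our own example): the algorithm takes a problem (graph, start, goal) as input and returns a solution as an action sequence.

```python
from collections import deque

def bfs_path(graph: dict, start, goal):
    """Breadth-first search: returns the shortest path (fewest steps) or None."""
    frontier = deque([[start]])   # queue of partial paths to extend
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Tiny state space: the solution is the action sequence A -> B -> D
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(bfs_path(graph, "A", "D"))  # → ['A', 'B', 'D']
```

Other strategies (depth-first, A*, and so on) differ mainly in the order in which the frontier of candidate paths is explored.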

𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: is concerned with the development and study of statistical algorithms that allow a machine to be trained so it can learn from the training data and then generalize to unseen data, to perform tasks without explicit instructions. There are three basic machine learning paradigms: supervised learning, unsupervised learning, and reinforcement learning.

• Supervised learning: is when algorithms learn to make decisions based on past known outcomes. The data set containing past known outcomes and other related variables used in the learning process is known as training data.

• Unsupervised learning: is a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabelled data and allowed to discover patterns and insights without any explicit guidance or instruction.

• Reinforcement learning (RL): is an interdisciplinary area of machine learning concerned with how an intelligent agent ought to take actions in a dynamic environment to maximize the cumulative reward.

Classification: a process where AI systems are trained to categorize data into predefined classes or labels.

K-Means Clustering: cluster analysis is an analytical technique used in data mining and machine learning to group similar objects into related clusters.
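A minimal one-dimensional sketch of the k-means idea (illustrative toy code, not a production implementation): points are repeatedly assigned to their nearest centre, and each centre is moved to the mean of its assigned points.

```python
def kmeans_1d(points, centers, iters=10):
    """Minimal 1-D k-means: assign each point to its nearest centre,
    then move each centre to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centers = [sum(pts) / len(pts) for pts in clusters.values() if pts]
    return sorted(centers)

# Two obvious groups around 1 and 10; starting centres are deliberately poor
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print([round(c, 3) for c in kmeans_1d(data, centers=[0.0, 5.0])])  # → [1.0, 10.0]
```

Real implementations work in many dimensions and add refinements such as smarter initialisation and convergence checks, but the assign-then-update loop is the whole idea.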

Principal Component Analysis (PCA): is a dimensionality reduction method used to reduce the dimensionality of large data sets, by transforming a large set of variables into a smaller one that still contains most of the information in the large set.

Automatic Reasoning: attempts to provide assurance about what a system or program will do or will never do based on mathematical proof.

Decision Trees: flow charts created by a computer algorithm to make decisions or numeric predictions based on information in a digital data set.

Random Forest: is an algorithm that combines the output of multiple decision trees to reach a single result. It handles both classification and regression problems.

Ensemble Methods: are techniques that aim at improving the accuracy of results in models by combining multiple models instead of using a single model. The combined models increase the accuracy of the results significantly.
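
The core mechanism behind random forests and other ensembles is simple to show. A minimal sketch (illustrative only; the three "weak classifiers" are made up for the example) of combining models by majority vote:

```python
def majority_vote(classifiers, x):
    """Combine independent models by majority vote: each model
    predicts a label, and the most common label wins."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Three hypothetical weak classifiers that disagree near the boundary
clf_a = lambda x: 1 if x > 4.5 else 0
clf_b = lambda x: 1 if x > 5.5 else 0
clf_c = lambda x: 1 if x > 5.0 else 0

print(majority_vote([clf_a, clf_b, clf_c], 5.2))  # two of three vote 1, so 1
```

Because the individual models err in different places, the combined vote is more accurate near the boundary than any single model; a random forest applies the same idea to many decision trees trained on random subsets of the data.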

Naive Bayes: is a statistical classification technique based on Bayes Theorem. It is one of the simplest supervised learning algorithms.
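
Its simplicity is easy to demonstrate. A toy word-count spam classifier (illustrative only; the tiny training set and function names are invented for the example) applying Bayes' Theorem with Laplace smoothing:

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (word_list, label). Count labels and words."""
    labels = Counter(label for _, label in docs)
    word_counts = {lbl: Counter() for lbl in labels}
    vocab = set()
    for words, lbl in docs:
        word_counts[lbl].update(words)
        vocab.update(words)
    return labels, word_counts, vocab

def classify_nb(model, words):
    """Pick the label maximising log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing for unseen words."""
    labels, word_counts, vocab = model
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for lbl, n in labels.items():
        lp = math.log(n / total)  # log prior
        denom = sum(word_counts[lbl].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[lbl][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

docs = [(["win", "cash", "now"], "spam"),
        (["cheap", "cash", "offer"], "spam"),
        (["meeting", "agenda", "notes"], "ham"),
        (["project", "meeting", "tomorrow"], "ham")]
model = train_nb(docs)
print(classify_nb(model, ["cash", "offer"]))  # "spam"
```

The "naive" part is the assumption that words occur independently given the label; despite that unrealistic assumption, the method works surprisingly well in practice.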

Anomaly Detection: the identification of rare events, items, or observations which are suspicious because they differ significantly from standard behaviours or patterns.
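
The simplest statistical version of this idea (a sketch for illustration, not a production detector) flags readings that sit too many standard deviations from the mean:

```python
def z_score_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    from the mean of the sample."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) / std > threshold]

# Sensor readings clustered near 10, with one clear outlier
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 25.0, 10.1]
print(z_score_anomalies(readings, threshold=2.0))  # [25.0]
```

Real systems use more robust techniques (isolation forests, autoencoder reconstruction error), since a single large outlier inflates the mean and standard deviation it is measured against.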

𝗡𝗲𝘂𝗿𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀: are machine learning (ML) models designed to mimic the function and structure of the human brain. They help computers gather insights and meaning from text, data, and documents by being trained to recognise patterns and sequences.

Large Language Model (LLM): is a type of neural network, based on the transformer architecture, that can recognize and generate text, answer questions, and produce high-quality, contextually appropriate responses in natural language. LLMs are trained on huge sets of data.

Radial Basis Function Networks: are a type of neural network used for function approximation problems. They are distinguished from other neural networks due to their universal approximation and faster learning speed.

Recurrent Neural Networks (RNN): are a type of neural network where the output from the previous step is used as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other; however, to predict the next word of a sentence the previous words are required, hence the need to remember them.

Autoencoders: are a type of neural network used to learn efficient codings of unlabelled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data into a code, and a decoding function that recreates the input data from the encoded representation.

Hopfield Networks: are recurrent neural networks with a synaptic connection pattern such that there is an underlying Lyapunov function (a measure of stability) for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov function.
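
A tiny demonstration (illustrative only; function names are my own) of the classic use of a Hopfield network as associative memory: store a bipolar pattern with the Hebbian rule, then recover it from a corrupted copy:

```python
def hebbian_weights(pattern):
    """Store one bipolar (+1/-1) pattern via the Hebbian rule:
    W[i][j] = p_i * p_j, with a zero diagonal."""
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Synchronous updates: each unit takes the sign of its weighted
    input. The state descends the network's energy (Lyapunov) function."""
    sign = lambda v: 1 if v >= 0 else -1
    for _ in range(steps):
        state = [sign(sum(w * s for w, s in zip(row, state)))
                 for row in weights]
    return state

stored = [1, -1, 1, 1, -1]
W = hebbian_weights(stored)
noisy = [1, -1, -1, 1, -1]  # one bit flipped
print(recall(W, noisy))  # recovers the stored pattern
```

The noisy input sits in the "basin of attraction" of the stored pattern, so the update dynamics settle into that local energy minimum, which is the behaviour the definition above describes.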

Modular Neural Networks: are characterized by a series of independent neural networks moderated by some intermediary to allow for more complex management processes.

Adaptive Resonance Theory (ART): is a theory developed to address the stability-plasticity dilemma. The terms adaptive and resonance mean that the network can adapt to new learning (adaptive) without losing previous information (resonance).

Deep Learning:  is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions. The adjective deep refers to the use of multiple layers in the network.

Transformer Model: is a neural network that learns context, and thus meaning, by tracking relationships in sequential data, applying an evolving set of mathematical techniques to detect the subtle ways even distant data elements in a series influence and depend on each other.

Convolutional Neural Networks (CNN): is a regularized type of feed-forward neural network that learns feature engineering by itself via filters or kernel optimization.

Long Short-Term Memory Networks (LSTM): are a type of recurrent neural network (RNN) designed to address the vanishing gradient problem present in traditional RNNs.

Deep Reinforcement Learning: is a subfield of machine learning that combines reinforcement learning (RL) and deep learning.

Generative Adversarial Networks (GAN): is a class of machine learning frameworks for generative AI. Two neural networks contest with each other in the form of a zero-sum game, where one agent’s gain is the other agent’s loss. Given a training set, this technique learns to generate new data with the same statistics as the training set. A GAN trained on photographs can generate new photographs that look at least superficially authentic.

Deep Belief Networks (DBN): are a type of neural network composed of several layers of shallow neural networks (restricted Boltzmann machines, or RBMs) that can be trained using unsupervised learning. The output of each RBM is used as input to the next layer of the network, until the final layer is reached. The final layer of the DBN is typically a classifier that is trained using supervised learning. DBNs are effective in applications such as image recognition, speech recognition, and natural language processing.

For more discussion on the use of AI in project management see:
https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

One Defence Data – Another ‘Big Consultant’ issue?

Hidden in the pre-Christmas holiday fun, the ABC[1] published an ‘investigations exclusive’ by Linton Besser and defence correspondent Andrew Greene[2] that needs more attention.

It appears project ICT2284 (One Defence Data), a $515 million project to unify and exploit the data resources held by the Department of Defence, is in trouble due to Hastie (pun intended) decisions made before the last election. Within this overall project, a $100 million One Defence Data “systems integrator” contract was awarded to KPMG Australia Technologies Solutions on the eve of the last federal election, and the then assistant minister for defence, Andrew Hastie, announced KPMG’s contract, promising it would “deliver secure and resilient information systems”.

This award was made after KPMG had been paid $93 million between 2016 and 2022 for consulting work on a range of strategic advice, which included the development of ICT2284 and its failed forerunner, known as Enterprise Information Management, or EIM.

Unsurprisingly, the review by Anchoram Consulting highlighted both governance and procedural issues including:

  • The project has been plagued by a “lack of accountability” and conflicts of interest.
  • The documents suggest there is profound confusion inside Defence about who is in charge and what is actually being delivered.
  • Core governance documents have not been signed off and key requirements of KPMG’s contract have been diluted from “mandatory” to “desirable”, sometimes in consultation with KPMG itself.
  • The project had been “retrospectively” designed to justify a $100 million contract that was issued to KPMG Australia Technologies Solutions, or KTech, exposing the department to “significant risk”.

The heart of ICT2284’s problem appears to be that the project’s fundamental design work had “not been done due to … the rush to meet deadlines tied to the Cabinet submission and related procurement activities”, with “no understood and agreed, desired end-state”.

Predictably, both the area of Defence running the project (CIOG, the Chief Information Officer Group) and KPMG reject the report’s findings.

The full ABC report is at: https://amp.abc.net.au/article/103247476

From a governance perspective, the biggest ongoing issue appears to be the lack of capability within CIOG, and government generally, to manage this type of complex project. The downsizing and deskilling of the public service has been ongoing for decades (under both parties). This means the outsourcing of policy development to the big consultancies is inevitable, and their advice will be unavoidably biased towards benefiting themselves.

The actions by the current government to reverse this trend are admirable but will take years to be effective. In the meantime, we watch.

For more on governance failures see: https://mosaicprojects.com.au/PMKI-ORG-005.php#Process4

For good governance practice see: https://mosaicprojects.com.au/PMKI-ORG-005.php#Process3


[1] Australian Broadcasting Corporation

[2] Posted Tue 19 Dec 2023 at 6:41pm.

A Brief History of Agile

The history of agile software development is not what most people think, and is nothing like the story pushed by most Agile Evangelists.

Our latest publication, A Brief History of Agile, shows that from the beginning of large system software development the people managing the software engineering understood the need for prototyping and for iterative and incremental development. This approach has always been part of the way good software is developed.

The environment in which the authors of the early papers (referenced and linked in the article) were operating, satellite software and ‘cold war’ control systems, plus the limitations of the computers they were working on, did require a focus on testing and documentation; it’s too late for a bug-fix once WW3 has started. But this is no different to modern-day control systems development where people’s lives are at stake. Otherwise, nothing much has changed: good software is built incrementally and tested progressively.

The side-track into ‘waterfall’ seems to have been started by people with a focus on requirements management and configuration management, both approached from a document-heavy, bureaucratic perspective. Add the desire of middle management for the illusion of control, and you get waterfall imposed on software developers by people who knew little about the development of large software systems. As predicted in 1970, ‘doing waterfall’ doubles the cost of software development. The fact that waterfall survives in some organisations to the present day is a product of culture and the desire for control, even if that control is an illusion.

The message from history, echoed in the Agile Manifesto, is you need to tailor the documentation, discipline, and control processes, to meet the requirements of the project. Developing a simple website with easy access to fix issues is very different to developing the control systems for a satellite that is intended to work for years, millions of miles from earth.

To read the full article and access many of the referenced papers and third-party analysis see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile