Computers before ENIAC

The foundations of modern computing lie in the designs of Charles Babbage. While neither of his machines was built in his lifetime (a ‘Difference Engine’ was eventually constructed in the 1990s), Augusta Ada King, Countess of Lovelace, better known as Ada Lovelace, was able to write a computer program to run on the Analytical Engine.

Babbage’s analytical engine

Babbage’s analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s difference engine, which was a design for a simpler mechanical calculator. The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. Neither of Babbage’s designs was built at the time.

Note:  Mechanical calculating machines became increasingly sophisticated and common in the 20th century.  

The trigger for modern computers seems to have been the development of telephone relays and then vacuum valves in the 1930s. This led to:

Zuse Apparatebau (Zuse Apparatus Construction)

Konrad Ernst Otto Zuse was a German civil engineer and pioneering computer scientist who invented the world’s first programmable computer.

In 1938, he finished the Z1, which contained some 30,000 metal parts and never worked well due to insufficient mechanical precision. In September 1940 Zuse presented the Z2, which covered several rooms in the parental flat, to experts of the Deutsche Versuchsanstalt für Luftfahrt (DVL; German Research Institute for Aviation). The Z2 was a revised version of the Z1 using telephone relays.

In 1940, Zuse built the S1 and S2 computing machines, special-purpose devices that computed aerodynamic corrections to the wings of radio-controlled flying bombs. The S2 featured an integrated analog-to-digital converter under program control, making it the first process-controlled computer.

In 1941, he improved on the basic Z2 machine and built the Z3. The Z3 was a binary 22-bit floating-point calculator featuring programmability with loops but without conditional jumps, with memory and a calculation unit based on telephone relays. Despite the absence of conditional jumps, the Z3 was a Turing-complete computer[1].

It was not until 1949 that Zuse was able to resume work on the Z4. He showed the computer to the mathematician Eduard Stiefel of the ETH Zurich, and the two men agreed a deal to lend the Z4 to the ETH. The Z4 was finished and delivered to the ETH Zurich in July 1950, where it proved very reliable. At that time, it was the only working digital computer in Central Europe.

Atanasoff–Berry computer (ABC)

Conceived in 1937, the Atanasoff–Berry computer (ABC) was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed to solve systems of linear equations and was successfully tested in 1942. This was the first automatic electronic digital computer, but it was limited by the technology of the day: it was neither programmable nor Turing-complete. Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit) – a component integrated into every modern processor’s design. The final version was ready in 1942, using binary arithmetic, regenerative memory, parallel processing, logic circuits, and the separation of memory and computing functions.

In 1940, Dr. John Mauchly asked Dr. Atanasoff to show him the ABC machine. Atanasoff agreed, but this was not a simple request: Dr. Mauchly went on to use many of Dr. Atanasoff’s basic ideas in the later design of the ENIAC, developed together with Dr. Presper Eckert.

Operational in 1946, ENIAC was initially considered the world’s first electronic digital computer. Many years later, allegations that Dr. Mauchly had appropriated Atanasoff’s ideas were tested in court. In 1973 the trial verdict concluded that the ENIAC computer was “derived” from the ideas of Dr. Atanasoff. See – https://jva.cs.iastate.edu/courtcase.php

When Judge Larson distributed the formal opinion on 19 October 1973, it was a clear and unequivocal finding that Mauchly’s basic ENIAC ideas were “derived from Atanasoff, and the invention claimed in ENIAC was derived from Atanasoff.”

The Harvard Mark I (pictured)

The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator (ASCC), was one of the earliest general-purpose electromechanical computers, used in the war effort during the last part of World War II.

Howard Aiken was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier, which led him to study Babbage and to add references to the Analytical Engine to his proposal; the resulting machine “brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features.”

The original concept was presented to IBM by Howard Aiken in November 1937. After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939. The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the US Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944. Programming was by punched tape; the example pictured shows how ‘bugs’ were corrected by patching the tape.

One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his “analytical engine” in 1837.

A quick list of early computers, from History of computing hardware § Early digital computer characteristics:

Babbage Difference engine           Design 1820s (Not built until the 1990s)
Babbage Analytical Engine            Design 1830s Not built
Torres’ Analytical Machine            1920
Zuse Z1 (Germany)                          1939    
Bombe (Poland, UK, US)                 1939 (Polish), March 1940 (British), May 1943 (US) – Used to decode Enigma, discussed in A Brief History of Agile.
Zuse Z2 (Germany)                          1940    
Zuse Z3 (Germany)                          May 1941
Atanasoff–Berry Computer (US)   1942  
Colossus Mark 1 (UK)                     December 1943 – Used to decode the Lorenz cipher at Bletchley Park UK, discussed in A Brief History of Agile.
Harvard Mark I – IBM ASCC (US)   May 1944
Colossus Mark 2 (UK)                      1 June 1944
Zuse Z4 (Germany)                          March 1945 (or 1948)
ENIAC (US)                                       December 1945 – see ENIAC and the Origins of Computers.

For more on the history of Agile and Computing see:
https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile


[1] Turing completeness is the property of a computational model or a system of instructions (computer system, programming language, etc.) that makes it theoretically capable of executing any algorithm, regardless of complexity.

ENIAC and the Origins of Computers

In our paper A Brief History of Agile we traced the development of sophisticated calculating machines transitioning into the first computers through the 1940s, and the shift from mechanical switching to valves and electronic switches. A key part of this development, only briefly mentioned in that paper, was ENIAC (Electronic Numerical Integrator And Computer). This computer was designed during World War II to calculate the ballistic trajectories of new gun designs. Engineers John Presper Eckert and John William Mauchly[1] are credited by history with creating ENIAC between 1943 and 1945. When complete, it occupied 167 square meters.

One of the omissions in the historical record is the developments that fed into the creation of ENIAC, and a number of parallel but isolated developments. These are discussed in Computers before ENIAC.

Another of the omissions is recognition of the people who programmed ENIAC. Various women appear in the preserved photographs for good reason: they were the ones who programmed the machine.

The term computer derives from the Latin computare and originally referred to a person who performs mathematical calculations. This name was given by the U.S. Army to the people, mainly women, who solved equations by hand to calculate the trajectories of projectiles fired from large guns. Before ENIAC, ballistic table calculations were done by hand by 80 female mathematicians working at the University of Pennsylvania.

Six of these computers were chosen to carry out the same calculation process using the new ENIAC machine. The mathematicians Ruth Lichterman Teitelbaum, Frances Bilas Spence, Betty Jean Jennings Bartik, Marlyn Wescoff Meltzer, Betty Snyder Holberton, and Kathleen McNulty Mauchly Antonelli were hired by the United States government to design and write the ENIAC programs.

The men designed and built the machine, but the computers were the people who had to create the programs and then program the machine by connecting and disconnecting the wires to its 6,000 pegs to calculate the ballistic tables. They had to program in binary (ones/connected and zeros/not connected) without help or manuals; the only documentation was the machine wiring diagrams.

Despite successfully completing this complex task, when ENIAC was publicly introduced in 1946, at an event where their software worked perfectly, the women were not named. These women laid the groundwork for making programming simple and accessible for everyone. They created the first set of routines, the first classes in programming, and the first software applications[2]. Unfortunately, they had to wait decades for recognition of their work.

For more on the history of Agile and Computing see:
https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile


[1] Mauchly’s work significantly influenced the evolution of computer science during the 1940s and 1950s. He went on to develop the UNIVAC I computer used by Du Pont in the development of CPM scheduling; see A Brief History of Scheduling: https://mosaicprojects.com.au/PDF_Papers/P042_History_of_Scheduing.pdf (page 8)
 

[2] Special mention is due to Augusta Ada King, Countess of Lovelace, better known as Ada Lovelace, who is credited with being the first person to write a computer program to run on Charles Babbage’s Analytical Engine: /2022/05/11/what-is-an-algorithm/

The Quest for Control

In our paper A Brief History of Agile we looked at the evolution of the agile software development approach, which briefly covered the ‘Waterfall Dead-end’. This post looks in more detail at the factors that made Waterfall appear attractive to a large number of competent people, although most of them were not involved in developing major software programs. The people actually involved in managing major software developments were always largely in favor of iterative and incremental approaches to software development, which evolved into Agile some 20+ years later. The focus of this discussion is on the structured ‘Waterfall’ approach to developing software.

Just for clarification, Waterfall has never been used for other types of projects, and is not a synonym for plan-driven, predictive projects.

In most hard[1] projects, the design has to be significantly progressed before much work can be done, and the work has to be done in a defined sequence. For example, before concreting a house slab, the precise position of every plumbing fitting and wall has to be known (i.e., the architectural plans finalized), so that a pipe of the right size is positioned exactly under a fitting or centered within a wall line. The loadings on the slab also have to be known so the engineering design has correctly calculated the required thicknesses of slab, ground beams, and steel reinforcement. Once all of the necessary design has been done, the various trade works have to be completed in the right sequence and checked, before the slab can be poured. Once the concrete is set, any change is very expensive!

This is quite different to software projects, which are a class of soft project that is particularly amenable to incorporating change and to being developed incrementally and iteratively to minimize test failures and rework. This iterative and incremental approach was being used by most major software development projects prior to the introduction of Waterfall in 1988, so what was it that made the Waterfall concept seem attractive? To understand this, we need to go back some 20 years before the Agile Manifesto was published, to the USA Defense Department of the 1980s:

  • The Cold War was in full swing and the fear of the Soviet Union’s perceived lead in technology was paramount. Sputnik 1 had been launched in 1957 and the USA still felt they were playing catch-up.
  • Defense projects were becoming increasingly complex systems of systems and software was a major element in every defense program.
  • Complexity theory was not generally understood; early developments were largely academic theories[2].
  • CPM was dominant and appeared to offer control of workflows.
  • EVM appeared to offer control of management performance.
  • Disciplined cost controls were more than 100 years old.

The three dominant controls paradigms, CPM, EVM, and cost control, appeared to offer certainty in most projects, but there seemed to be very little control over the development of software. This was largely true: none of these approaches offers much assistance in managing the creative processes needed to develop a new software program, and the concept of Agile was some 20+ years in the future.

In this environment, Waterfall appeared to offer the opportunity to bring control to software projects by mimicking other hard engineering projects:

  1. The management paradigm most familiar to the decision makers was hard engineering – you need the design of an aircraft or missile to be close to 100% before cutting and riveting metal – computers were big new pieces of ‘metal’, so why treat them differently?
  2. For the cost of 1 hour’s computer time, you could buy a couple of months of software engineering time – spending time to get the design right before running the computer nominally made cost-sense. 
  3. The ability to work on-line was only just emerging and computer memory was very limited. Most input was batch loaded using punch cards or tape (paper or magnetic). Therefore, the concept of ‘design the code right, load it once, and expect success’ may not have seemed too unrealistic.
The moon-landing software written by Margaret Hamilton (c. 1969)

The problem was that nothing actually worked. Iterative and incremental development means you are looking for errors in small sections of code and using the corrected, working code as the foundation for the next step. Waterfall failures were ‘big-bang’, with the problems hidden in thousands of lines of code and often nested, one within another. Finding and fixing errors was a major challenge.

To the US DoD’s credit, they ditched the idea of Waterfall in remarkably quick time for a large department. Waterfall was formally required by the US DoD for a period of only six years, between 1988 and 1994; both before and after that period, iterative and incremental approaches were allowed.

The reasons why the name Waterfall still drags on are covered in two papers:
A Brief History of Agile
How should the different types of project management be described?

Conclusion

While Waterfall was an unmitigated failure, significantly increasing the time and cost needed to develop major software programs, the decision by the US DoD to implement Waterfall is understandable and reasonable in the circumstances. The software development methods current in the 1980s were largely iterative and incremental, and were failing to meet expectations. The new idea of Waterfall offered a solution: it was developed by people with little direct experience of major software development (who were therefore not tarnished with the perceived failures). The advice of people with significant software development experience was ignored – they were already perceived to be failing. The result was six years of even worse performance before Waterfall was dropped as a mandated method. The mistake was not listening to the managers with direct experience of developing major software systems. But these same people were the ones managing the development of software that was taking much longer and costing far more than allowed in the budget.

The actual cause of the perceived failures (cost and time overruns) was unrealistic expectations caused by a lack of understanding of complexity leading to overly optimistic estimates. Everyone could see the problems with the current approach to software development and thought Waterfall was a ‘silver bullet’ to bring control to a difficult management challenge.

Unfortunately, the real issues were in a different area: underestimating the difficulties involved in software development, and a lack of appreciation of the challenges of managing complex projects. This issue is still not fully resolved; even today, the complexities of managing major software developments are underestimated most of the time.

For more on the history of Agile, see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile


[1] The definition of hard and soft projects can be found at: /2023/01/21/hard-v-soft-projects/

[2] For more on complexity see: https://mosaicprojects.com.au/PMKI-ORG-040.php#Overview

Measuring Time Updated

The process of measuring time (and just about everything else) has become more precise and more confusing. The odd result of the last (2019) update to the international system of measurements (SI) is that almost everything, from length to weight, now has time (the second) as part of its definition.
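
As an illustration (a simplified summary of the 2019 SI definitions, not taken from the Measuring Time article itself): the second is fixed by the caesium-133 transition frequency, and the other base units are then defined in terms of it, for example:

    Δν(Cs-133) = 9 192 631 770 Hz            – defines the second
    c = 299 792 458 m/s                       – defines the metre in terms of the second
    h = 6.626 070 15 × 10⁻³⁴ kg·m²/s          – defines the kilogram in terms of the metre and second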

Our article on Measuring Time has been updated to trace the way time is measured from around 3500 BCE through to the present time.

See the full article at: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Overview

Lies, Damned Lies and Statistics*

While the source of this quotation is questionable, how you present data to management can really change the message without distorting the underlying data. A peer-reviewed study of COVID-19 deaths among people over 50 in NSW, Australia, published in PLOS One[1], shows:

Anti-Vax: More vaccinated people died than unvaccinated. A semi-true statement, but it omits the fact that more than 95% of the population had at least one vaccination.

Anti-Vax: There is no real need to get vaccinated, the death rate is very low at 0.38%. True, but this average combines the three vaccinated groups with the one unvaccinated group.

Pro-Vax: You are nearly 10 times more likely to die of COVID-19 if you are not vaccinated, compared to if you are fully vaccinated. Also true!

What is the real situation?  Looking at the chart at the top of this post shows a dramatic difference between vaccinated and unvaccinated people. The unvaccinated had a 1.03% chance of dying in the study year! This is much higher than the 0.093% probability of the fully vaccinated.  And by way of comparison, both are much higher than the probability of dying in a vehicle accident in Australia, which is 0.00445% per year.   
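
As a rough worked example (using the rounded percentages quoted above rather than the full study data), the relative risks are simple to check:

    1.03% ÷ 0.093%    ≈ 11   – unvaccinated vs fully vaccinated COVID-19 death risk
    0.093% ÷ 0.00445% ≈ 21   – fully vaccinated COVID-19 risk vs the road-death risk
    1.03% ÷ 0.00445%  ≈ 230  – unvaccinated COVID-19 risk vs the road-death risk

The exact multiple depends on which vaccinated group is used in the comparison, hence ‘nearly 10 times’ in the statement above.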

The population of NSW is over 8 million. From a public health perspective, the reduction in the demand for hospital beds between vaccinated and unvaccinated is massive – literally thousands of hospital beds were not required, particularly considering the numbers of people who get sick and survive. Vaccination saved the health system from collapse.

From a personal perspective the decision is more nuanced. The risk of COVID is relatively low and has reduced significantly since this study, but this low risk can be reduced by a factor of 10 by being vaccinated; offset by a one-in-several-million chance of an adverse reaction to the vaccine.  We did the numbers and decided to be fully vaccinated. Others decided to accept the relatively low risk of COVID.

But what does this tell you about project performance data? Getting accurate data is one thing; how you present this information is another. Far too many controls people stop at the first point (the data) and do not think through what message they need to communicate to management. The COVID data supports all three messages above, but the one that really matters is in the graph and the last point. Effectively communicating controls information is a skill in itself, see: Reporting & Communicating Controls Information

*Lies, Damned Lies and Statistics

The usual attributions of this quotation do not appear to be correct, according to the University of York: https://www.york.ac.uk/depts/maths/histstat/lies.htm


[1] The full study is at: https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0299844#pone-0299844-g004

Assessing Delays in Agile & Distributed Projects

Published in the April edition of PM World Journal, our latest paper, Assessing Delays in Agile & Distributed Projects, looks at the challenge of assessing delay and disruption in projects where a CPM schedule is not being used.

For the last 50 or more years, the internationally recognized approaches to assessing delay and disruption have been based on the forensic assessment of a CPM schedule. However, the methods detailed in RP 29R-03 Forensic Schedule Analysis and the SCL Delay and Disruption Protocol do not offer a viable option for assessing delay and disruption in a wide range of projects including:
– Projects managed using an agile or iterative approach to development.
– Distributed projects where there is no particular requirement for large parts of the work to be completed in any particular order.
– Projects managed using Lean Construction and similar approaches where the forward planning is iterative and short term.

With Agile becoming mainstream, the number of commercial contracts requiring the delivery of a defined output, within specified time and cost parameters, using management methodologies that do not support CPM scheduling is increasing. But the law of contract will not change. If the contractor fails to comply with the contract requirements to deliver the defined scope within the required time, the contractor will be in breach of contract and liable to pay damages to the client for that breach. A number of IT companies have already been successfully sued for failing to comply with their contractual obligations; the risk is real.

One way to minimize the contractor’s exposure to damages is to claim extensions of time to the contract for delay events, particularly delays caused by the client. What has been missing in the literature to this point in time is a set of processes for calculating the delay in the absence of a viable CPM schedule. The good news is the law was quite capable of sorting out these questions before CPM was invented; what is missing in the current situation is an approach that can be used to prove a delay in agile or distributed projects.

This paper defines a set of delay assessment methods that will provide a robust and defensible assessment of delay in this class of project where there simply is no viable CPM schedule. The effect of any intervening event is considered in terms of the delay and disruption caused by the loss of resource efficiency, rather than its effect on a predetermined, arbitrary, sequence of activities.

Download Assessing Delays in Agile & Distributed Projects and supporting materials from: https://mosaicprojects.com.au/PMKI-ITC-020.php#ADD

Understanding the Iron Triangle and Projects

The concept of the iron triangle as a framework for managing projects has long passed its use-by date. Almost everyone, including PMI, recognizes that the challenges faced by every project manager are multi-faceted and multi-dimensional – three elements are not enough. But dig back into the original concepts behind the triangle, and you do uncover a useful framework for defining a project, and for separating project management from general management.

The Iron Triangle

The origin of the project management triangle is attributed to Dr. Martin Barnes. In 1969[1], he developed the triangle as part of his course ‘Time and Money in Contract Control’ and labelled three tensions that needed to be balanced against each other in a project: time, cost, and output (the correct scope at the correct quality). The invention of the iron triangle is discussed in more depth in The Origins of Modern Project Management.

The concept of the triangle was widely accepted, and over the years different versions emerged:

  • The time and cost components remained unchanged, but the ‘output’ became variously scope or quality, and then scope became one of the three components with quality in the middle of the triangle (or vice versa). These changes are semantic:
    • you have not delivered scope unless the scope complies with all contractual obligations including quality requirements
    • achieving quality requires delivering 100% of what is required by the client.

  • The shift from tensions to constraints changes the concept completely. A constraint is something that cannot be changed, or requires permission to change. Tensions vary based on external influences; the three tensions can work together or against each other.
     
  • The introduction of iron!  It seems likely that the iron triangle is based on the concept of the iron cage from the English translation (1930) of Max Weber’s 1905 book The Protestant Ethic and the Spirit of Capitalism. The iron cage traps individuals in systems based purely on goal efficiency, rational calculation, and control[2].

So, while the concept of the iron triangle and/or triple constraint has been consigned to history, the original concept of the triangle, as a balance between the three elements that are always present in a project, still has value.

Defining a project

Understanding precisely what work is a project, and what is operational (or some other form of work), is becoming increasingly important as various methodologies such as Lean and Agile span the operations and project domains.

A number of different parameters are used to define or categorize projects.

There are many other ways to categorize projects, some of which are discussed in the papers at: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class. But these classifications do not really provide a concise definition of a project. And, while there are many different definitions, we consider the best definition of a project to be: 

Project:  A temporary organization established to accomplish an objective, under the leadership of a person (or people) nominated to fulfil the role of project manager[3].

The two key elements being:

  1. The project is temporary; the project delivery team or organization closes, and its people are reallocated or terminated, when the objective is achieved, and

  2. The project is established to accomplish an objective for a client which may be internal or external to the delivery organization.

The concept of a project being temporary is straightforward (even though it is often ignored). Departments that are set up and funded to maintain and enhance plant, equipment and/or software systems on an ongoing basis are not projects, and neither is most of their work. We discussed this several years back in De-Projectizing IT Maintenance, and the concept seems to be becoming mainstream with the concept of ‘flow’ being introduced to Disciplined Agile.

The value of the project management triangle is identifying the three mandatory elements needed to describe the client’s objective. There are usually a number of additional elements, but if any of these three are not present, the work is not a project:

  1. The expected outcome is understood. The project is commissioned to deliver a change to the status quo. This may be something new, altered, and/or removed; and the outcome may be tangible, intangible or a combination.  The outcome does not need to be precisely defined to start a project, and may evolve as the project progresses, but the key element is there is an understood objective the project is working to achieve.  
      
  2. There is an anticipated completion time. This may be fixed by a contract completion date, or a softer target with some flexibility.  However, if there are no limitations on when the work is expected to finish (or it is assumed to be ongoing), you are on a journey, not working on a project. 
     
  3. There is an anticipated budget to accomplish the work. Again, the budget may be fixed by a contract, or a softer target with some flexibility.  The budget is the client view of the amount they expect to pay to receive the objective. The actual cost of accomplishing the work may be higher or lower and who benefits or pays depends on the contractual arrangements. 

Conclusion

Understanding the difference between project work and non-project work is important. The overheads of project management are worthwhile when managing the challenges of delivering a project, including dealing with scope creep, cost overruns, schedule slippage, and change in general. The mission of the project team is to deliver the agreed objective as efficiently as possible.

The objective of an operational department is to maintain and enhance the organizational assets under its control. This typically needs different approaches and focuses on a wider range of outcomes. Many techniques are common to both operational and project work, including various agile methodologies and lean, and many management traits such as agility are desirable across the spectrum. The difference is understanding the overarching management objectives, and tailoring processes to suit.

For more on project definition and classification see:
https://mosaicprojects.com.au/PMKI-ORG-035.php#Overview


[1] We published and widely circulated this claim after a meeting with Dr. Barnes in 2005 at his home in Cambridge. So far no one has suggested an alternative origin.

[2] For more on the work of Max Weber, see The Origins of Modern Management: https://mosaicprojects.com.au/PDF_Papers/P050_Origins_of_Modern_Management.pdf

[3] The basis of this definition is described in Project Fact or fiction: https://mosaicprojects.com.au/PDF_Papers/P007_Project_Fact.pdf 

Waterfall is Dead

The PMI 2024 Pulse of the Profession has introduced a framework for categorizing projects based on the management approach being used: Predictive – Hybrid – Agile. If generally adopted, this framework will at long last kill off the notion of waterfall as a project delivery methodology.

As shown in our historical research The History of Agile, Lean, and Allied Concepts, the idea of waterfall as a project delivery methodology was a mistake, and its value as a software development approach was limited.

The PMI framework has some problems, but the predictive project delivery paradigm is described as focused on schedule, scope, and budget. These projects tend to use a phase-based approach and are plan driven. This describes most hard projects and many soft projects that are not using an unconstrained agile approach.

For a detailed review of the PMI 2024 Pulse of the Profession report, and how the classification system works see How should the different types of project management be described?, download from: https://mosaicprojects.com.au/Mag_Articles/AA026_How_should_different_types_of_PM_be_described.pdf

For more on project classification see: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class

WPM for Lean & Distributed Projects

The core concept underlying the Critical Path Method (CPM) is that there is one best way to undertake the work of the project and that this can be accurately modelled in the CPM schedule. This premise does not hold for either distributed projects, or projects applying Lean Construction management. These two types of projects differ: lean is a management choice, whereas distributed projects are a physical fact:
–  Distributed projects are ones where the physical distribution of the elements to be constructed means significant amounts of the work can be done in any sequence and changing the sequence when needed is relatively easy.
–  Lean construction is a project delivery process that uses Lean methods to maximize stakeholder value and reduce waste by emphasizing collaboration between everyone involved in a project. To achieve this the work is planned and re-planned as needed by the project team focusing on optimising production.

In both cases, the flexibility in the way the detailed work is performed and the relative ease with which the sequence can be changed means CPM is ineffective as a predictor of status and completion.

Our latest article WPM for Lean & Distributed Projects looks at how Work Performance Management (WPM) can be used to assess both the current status and the projected completion for these types of project, regardless of the number of sequence changes made to the overall plan.

Download WPM for Lean & Distributed Projects from: https://mosaicprojects.com.au/PMKI-SCH-041.php#WPM-Dist

See more on WPM as a valuable tool to add to your project controls system: https://mosaicprojects.com.au/PMKI-SCH-041.php#Overview

Ethics and Governance in Action


The best governed organizations will have ethical failures, even criminal activities, occurring from time to time. When an organization employs 1000s of people there will always be some who make mistakes or choose to do the wrong thing.  The difference between a well governed organization with a strong ethical framework and the others is how they deal with the issues.

The Bad

Over the last few months there has been a lot of commentary on major ethical failures by some of the ‘big 4’ accountancy firms (see: The major news story everyone missed: KPMG hit with record fine for their role in the Carillion Collapse). A common theme has been attempts by the partners running these organizations to minimize their responsibility and deflect blame. As a consequence, there have been record fines imposed on KPMG and massive long-term reputational damage caused to PWC by the Australian Tax Office scandal.

The Good

The contrast with the way the Jacobs Group (Australia) Pty Ltd (Jacobs Group) has managed an equally damaging occurrence could not be starker! Jacobs Group pleaded guilty to three counts of conspiring to cause bribes to be offered to foreign public officials, contrary to provisions of the Criminal Code Act 1995 (Cth). But the exemplary way this issue has been managed is an example for all.

Offering bribes to foreign public officials has been a criminal offence in Australia since 1995, and the Crimes Legislation Amendment (Combatting Foreign Bribery) Bill 2023 has just passed into law significantly increasing penalties.

Despite this, between 2000 and 2012, SKM (Sinclair Knight Merz) was involved in two conspiracies in the Philippines and Vietnam. Both conspiracies involved employees of SKM’s overseas development assistance businesses (the SODA business unit) paying bribes to foreign public officials in order to facilitate the awarding of public infrastructure project contracts to SKM. SKM employees embarked on a complex scheme to conceal the bribes by making payments to third party companies, and receiving fake invoices for services which were not in fact rendered. The conduct was known to and approved by senior persons at SKM, although concealed from the company more widely.

Jacobs Group acquired SKM in 2013, after the conduct had ceased. During the vendor due diligence processes, the conduct came to the attention of persons outside those involved in the offending, and the company’s external lawyers.

Despite the lawyers’ findings being subject to legal privilege, and the very remote possibility of the Australian authorities discovering the crime, the non-conflicted directors unanimously voted to self-report the findings to the Australian Federal Police (AFP), to waive legal privilege in the draft report, and to make it available to the AFP. The company also reported the findings of its investigation to a number of other authorities, including the World Bank, Asian Development Bank, AusAid, and ASIC.

The company and a number of individuals were charged in 2018, and Jacobs pleaded guilty to three counts of conspiring to cause bribes to be offered to foreign public officials. The matter only came to our attention because of a recent High Court ruling dealing with technical issues around the calculation of the fine to be paid by Jacobs.

Justice Adamson in the New South Wales Supreme Court sentenced the company on 9 June 2021. She found that while each of the offences committed fell within the mid-range of objective seriousness for an offence, this was mitigated by the fact that the company had self-reported the offending to authorities, and that the self-reporting was motivated by remorse and contrition rather than fear of discovery. The sentencing judge also found that the conduct was not widespread, and was effectively limited to the SODA business unit. She accepted evidence from the AFP that it was unlikely to have become aware of the conduct absent the company’s self-reporting, and that the company’s post-offence conduct was “best practice” and “of the highest quality”.

Based on these findings the amount of the fine to be paid by Jacobs is likely to be in the region of $3 million – a massive discount from the potential maximum that, based on the High Court decision, is likely to exceed $32 million.

Lessons on Governance and Ethics

The approach taken by Jacobs Group, following the identification of potential criminal conduct, is a useful guide as to how an ethical organization works:

  1. The prompt retention of independent external lawyers to investigate suspected instances of criminal misconduct.
  2. The decisions of the board of directors to self-report the conduct to authorities and provide ongoing assistance and cooperation to law enforcement and prosecutorial authorities, notwithstanding the risk of criminal sanction.
  3. Committing to remediation steps to address the conduct (and seeking to prevent any repeat of it), including by overhauling relevant policies and procedures and making appropriate operational changes including:
  • suspending and then terminating relevant individual employees who had participated in the conduct;
  • operational changes to management and oversight of the SODA business unit that had been involved in the conduct, and changing approval processes for all payments by that unit;
  • introducing a new Code of Conduct which explicitly prohibited the offering of inducements to public officials;
  • introducing a requirement for the completion of a bribery and corruption risk assessment before committing to new projects;
  • upgrading various internal policies, including the company’s whistleblower, donations and gifts and entertainment policies. It also introduced new policies which discouraged the use of agents, and required the screening of all new suppliers and sub-consultants for bribery and corruption risk. The company also engaged an independent monitor to review the changes made to its policies;
  • updating and expanding existing bribery and corruption training programs for staff; and
  • modifying internal audit practices to more closely scrutinize non-financial risks, such as bribery and corruption.

One definition of ethical behaviour is doing the right thing when no one is looking. The contrast between Jacobs and KPMG’s outcomes is a lesson worth remembering.

For more on governance and organizational ethics see: https://mosaicprojects.com.au/PMKI-ORG-010.php#Overview