
Resourcing Schedules – A Conundrum 2

Following on from the comments on my post ‘Resourcing Schedules – A Conundrum’, there are still some basic problems to resolve.

As the commentators suggest, KISS is certainly an important aspect of effective resource planning, i.e., planning resources at an appropriate level of detail for real management needs. But the basic issue remains: you cannot rely on a scheduling tool to optimise the duration of a resource-levelled schedule.

We use the basic network below in our scheduling courses (download the network, or see more on our scheduling training).

Network for Analysis (download a PDF from the link above)

No software I know of gets this one ‘right’.

When you play with the schedule, the key to achieving the shortest overall duration turns out to be starting the critical resource (Resource 3) working as soon as possible.

To achieve this, Resource 2 has to focus 100% on completing Task B as quickly as possible. BUT Task C, not Task B, is on the time analysis critical path, and 99% of the time the software picks C to start before B.
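To see the effect in miniature, here is a stripped-down sketch – not the actual network from the PDF above; the tasks, durations and resource assignments are invented for illustration. A serial leveller that prioritises by total float puts the time-analysis critical task first, and produces a longer levelled schedule than simply starting the task that feeds the scarce resource:

```python
tasks = {
    # name: (duration, resource, predecessors) -- invented values
    "B": (2,  "R2", []),
    "C": (8,  "R2", []),
    "D": (10, "R3", ["B"]),
    "E": (5,  "R3", ["C"]),
}

def level(priority):
    """Serial levelling: place tasks in priority order at the earliest
    time their predecessors are finished and their (single-unit)
    resource is free."""
    finish = {}                    # task -> finish time
    free_at = {"R2": 0, "R3": 0}   # resource -> time it becomes free
    for name in priority:
        dur, res, preds = tasks[name]
        start = max([free_at[res]] + [finish[p] for p in preds])
        finish[name] = start + dur
        free_at[res] = finish[name]
    return max(finish.values())

# Time analysis: C-E (13) is the critical path, B-D (12) has float,
# so a float-based leveller starts C before B...
print(level(["C", "E", "B", "D"]))   # -> 23
# ...but starting B first releases the scarce R3 work 6 days earlier:
print(level(["B", "D", "C", "E"]))   # -> 17
```

Neither ordering breaks the float-priority rule’s own logic – which is exactly why you cannot rely on the tool to find the shorter answer.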

This is not a new problem. A recent paper by Kastor and Sirakoulis (International Journal of Project Management, Vol. 27, Issue 5, July 2009, p. 493) reports the results of a series of tests: Primavera P6 achieved a levelled duration of 709, Microsoft Project 744 and Open Workbench 863. Play with the resource levelling settings in P6 and its results range across 709, 744, 823 and 893 – a huge variation, and the best option (P6 at 709) was still some 46% longer than the time analysis result. Other analyses reported in the 1970s and 80s showed similar variability of outcomes.

As Prof. George Box stated, ‘All models are wrong, some are useful’… the important question is how wrong the model has to be before it is no longer useful.

Computer-driven resource schedules are never optimal; done well, they are close enough to be useful (but this needs a good operator plus a good tool). And good scheduling practice requires knowing when near enough is good enough, so that you can use the insights and knowledge gained to get on with running the project – remembering that even the most perfectly balanced resource schedule will fall out of balance at the first update…

How you encapsulate this in a guide to good scheduling practice is altogether a different question. I would certainly appreciate any additional feedback.

9 responses to “Resourcing Schedules – A Conundrum 2”

  1. Pat,

    This approach, although common, misses a critical concept.

    Arrange the work details to match the available resources. That is, partition the work tasks so as to match the skill sets. This is a systems engineering paradigm of constructing the “cut set” of dependencies between the various elements.

    The Design Structure Matrix algorithm used in aerospace, defense, BMW, GE and other “complex” development domains provides the CAD-like tool needed to address these resource allocation problems.

    With the decomposed cut set, the resources are the dependencies. DSM is then used to define the interfaces between the dependent elements and isolate the overlaps.
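    A toy sketch of the DSM partitioning (sequencing) step Glen describes – the five elements and their dependencies are invented for illustration, and real DSM tools add tearing, clustering and banding on top of this basic move:

```python
# Invented example elements; dsm[k] = the set of elements k depends on.
dsm = {
    "Spec":     set(),
    "Design":   {"Spec"},
    "Build":    {"Design", "Test"},   # Build and Test are coupled
    "Test":     {"Build"},
    "Evaluate": {"Test"},
}

def partition(dsm):
    """Classic DSM sequencing: peel input-free elements to the front
    and output-free elements to the back; whatever survives is a
    coupled block that has to be iterated (and resourced) together."""
    remaining = set(dsm)
    front, back = [], []
    while True:
        no_inputs = [k for k in sorted(remaining) if not (dsm[k] & remaining)]
        for k in no_inputs:
            front.append(k)
            remaining.discard(k)
        fed = {d for k in remaining for d in dsm[k] if d in remaining}
        no_outputs = [k for k in sorted(remaining) if k not in fed]
        for k in no_outputs:
            back.insert(0, k)
            remaining.discard(k)
        if not (no_inputs or no_outputs):
            break
    coupled = sorted(remaining)
    return front + ([coupled] if coupled else []) + back

print(partition(dsm))
# -> ['Spec', 'Design', ['Build', 'Test'], 'Evaluate']
```

    The surviving [‘Build’, ‘Test’] block is the kind of coupled element set that, on Glen’s argument, should drive how the work is partitioned to match the available skills.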

  2. Pat,

    If you add one resource (Tester 2), or convert the Evaluator to a second tester, you take 3 weeks out of the leveled duration (assuming the numbers in the boxes are work days).

    These types of “examples” are over-constrained problems – possibly useful for illustrative purposes, but in practice you would use multi-role resources, or drop a role, since the Evaluator is finished at the end of Task C.

    So, to keep the head count constant, have the Evaluator leave the project after C and hire a new tester. Or, as I suggested, use the Evaluator as a tester.

    • Both of your comments are of interest, Glen.

      Certainly, redesigning the network tasks and/or changing the mix, skills or quantities of the available resources would solve this scheduling problem.

      Your initial comment about shifting from a task-based focus to a resource-work focus is also really interesting. This goes back to the ideas Kelley and Walker were initially focused on in 1957 around optimising productivity and the time/cost trade-off.

      The challenge is that CPM, as it has developed and as it is written into most contracts, is a task/time focused model, not a resource/time model. Resource levelling is a separate process run after the CPA ‘time analysis’, and it is demonstrably not efficient without manual intervention and the application of new ideas to change the time-based logic, and/or common sense.

      The logic in this little network is real: sign-offs were needed to move to the next stage (a fail at any point would cause the option to be abandoned), and the two sets of tests were completely independent, but there probably is flexibility in the people. The question is, how do you find this type of conundrum in the middle of several hundred tasks in a real schedule?

  3. Pat,

    No doubt it is a real schedule. But if the Evaluator is replaced (with no increase in net head count), the schedule is reduced. Even with the independent testing.

    You identify the issue within the collective work package process through DSM.

  4. Glen,

    Would you mind adding a series of posts to your site, or pointing to a tutorial that uses DSM in the manner you suggest?

    Do you think Monte Carlo analysis has a role here? I usually see it in the context of cost/duration, not so much with a resource focus.

    • Hi Shawn,

      Glen’s blog is Herding Cats; DSM is not a tool I have used, so I can’t help.

      Monte Carlo is a useful tool for developing insights into the likely range of cost or time outcomes and establishing appropriate contingencies, generally at the macro level (see the sketch below). You can’t use it for the day-to-day running of a project, though, and because of the intrinsic variability it’s not much help in resource levelling.

      Pat
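      A minimal sketch of the macro-level use Pat describes above – sample three-point duration estimates many times and read the range off the results. The two paths and every number here are invented for illustration:

```python
import random

# Invented three-point (optimistic, most likely, pessimistic) duration
# estimates for two parallel paths through a small network, in work days.
paths = {
    "design-build":    [(8, 10, 12), (18, 20, 24)],
    "procure-install": [(15, 22, 45)],
}

def one_run():
    # The project finishes when its longest path finishes.
    # random.triangular takes (low, high, mode).
    return max(sum(random.triangular(o, p, m) for o, m, p in chain)
               for chain in paths.values())

runs = sorted(one_run() for _ in range(10_000))
p50, p80 = runs[len(runs) // 2], runs[int(len(runs) * 0.8)]
print(f"P50 ~ {p50:.0f} days, P80 ~ {p80:.0f} days, "
      f"so a P80 contingency of ~ {p80 - p50:.0f} days over the median")
```

      Useful for sizing a contingency, but – as Pat says – no help in deciding which task a resource should do first on Monday morning.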

  5. Hi Pat, I’m in catch-up mode and just now came to read your post. I agree with your comments in principle and would only add the following: the biggest issue I see in existing scheduling tools is not their inability to optimize resource scheduling, but rather their inability to highlight the inherent risks associated with their focus on the CPM, without providing any warning of the risks associated with the availability (or otherwise) of critical resources. I am concerned that many PMs interpret the tool’s output literally and inadvertently introduce unaccepted risk into their project. The obvious solution is the rigorous use of MCS (Monte Carlo simulation), but an easier, and possibly more cost-effective, solution would be the inclusion of near-critical-path identification, based not just on schedule logic but also on resource dependencies.

  6. Shim,

    Near-critical-path assessment is an integral element of MCS tools. And since there are useful implementations of MCS that are free, and at times embedded in Microsoft Project, the near-CP processes can be done for free as well.

    For simple assessments, all the complex issues around probabilistic sampling and the other over-complex processes of tools like Risk+ and @RISK for Project can be skipped in the beginning.

    Care needs to be taken though, since on any real project the CP and NCP can change weekly. The CP and NCP should be conversation starters, not solutions to the risk issues.
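    For a rough picture of what that near-CP assessment does, here is a sketch reusing the invented network from the Monte Carlo example above: count how often each path turns out to be the longest across the runs – a crude ‘criticality index’ that a single deterministic CPM pass cannot show:

```python
import random
from collections import Counter

# Same invented paths and three-point estimates as the sketch above.
paths = {
    "design-build":    [(8, 10, 12), (18, 20, 24)],
    "procure-install": [(15, 22, 45)],
}

critical = Counter()
for _ in range(10_000):
    lengths = {name: sum(random.triangular(o, p, m) for o, m, p in chain)
               for name, chain in paths.items()}
    critical[max(lengths, key=lengths.get)] += 1

for name, n in critical.most_common():
    print(f"{name}: critical in {n / 100:.1f}% of runs")
# With most-likely durations, CPM calls design-build (30 days) 'the'
# critical path, yet procure-install drives the finish in a sizeable
# share of runs -- exactly the near-critical risk Shim is pointing at.
```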
