
Chapter 11
Project risk management
Life “looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait”
—G. K. Chesterton[1]
Every project is risky, meaning project outcomes won’t necessarily turn out as planned. The project could significantly overrun cost or schedule targets, or the end-item may fall short of requirements. Project outcomes result from many things, including some that are unpredictable and uncontrollable. Risk level is associated with the certainty that outcomes will be as expected. High-certainty outcomes have low risk; low-certainty outcomes have high risk. Certainty derives from knowledge and experience gained in prior projects, as well as management’s ability to mitigate anticipated risks and respond to newly emerging ones.[2]
11.1 Risk concepts
Risk is a function of the uniqueness of a project and the experience of the project team. When activities are routine or have been performed many times before, managers can anticipate the risks and manipulate the system design and project plan to achieve the desired outcomes. But when the work is unique or the team inexperienced, the outcomes are less certain, which makes it difficult to anticipate problems or know how to deal with them. Even routine projects can be risky due to factors that newly arise or are beyond anyone’s control.
The notion of project risk involves two concepts:
1. The likelihood that some problematic event will occur.
2. The impact of the event if it does occur.
Risk is a joint function of the two,
Risk = f(likelihood, impact)
A project will ordinarily be considered “risky” whenever either the likelihood or the impact is large. For example, it will be considered risky when the potential impact is human fatality or massive financial loss, even if the likelihood is small. Risk can also mean opportunity, such as potential greater rewards, savings, or benefits. Typically, however, risk management focuses on negative consequences.
See Chapter 10
See Chapter 19
Figure 11.1
Risk management elements and process.
Many managers are accustomed to dealing with facts, figures, and hard numbers, so they find the concept of risk hard to
deal with. Faced with uncertainty, they prefer to ignore problems, though, of course, that doesn’t make the problems go away.
Although risk cannot be eliminated, it can be reduced and plans readied in case things go wrong; this is the purpose of the
risk management process, shown in Figure 11.1.
11.2 Risk identification
You can only manage things you are aware of. Thus, risk management begins with identifying the risks and predicting their consequences.
Risk in projects is sometimes referred to as the risk of failure, which implies that a project might fall short of schedule, budget, or technical performance goals by a significant margin.
Among ways to identify project risks, one is to proceed according to project chronology, that is, to look at the phases and stages in the life cycle (feasibility, contract negotiation, system concept, definition, etc.) and identify the risks in each. Each phase presents unique hurdles and problems that could halt the project immediately or lead to later failure (as illustrated in Chapter 10, Table 10.2). In product development projects, the risk of failure is highest in the early stage of preliminary design and diminishes thereafter. Some risks remain throughout, such as potential loss of funding or management commitment.
Risk can also be identified by type of work or technical function, such as engineering risks associated with product reliability and maintainability or production risks associated with the manufacturability of a product or the availability of raw materials.
Risk identification starts in the conception phase and focuses on those risk factors that would make the project difficult or
destined to fail. Factors that contribute to high risk include:
Using an unusual approa.
Aempting to both develop a new system and advance tenology at the same time.
Developing and testing new equipment, systems, or procedures.
Operating in an unpredictable or variable environment.
High-risk factors must be studied and well understood before the project can be approved and funds can be committed. Risks identified in the conception phase are often broadly defined and subjectively assessed, though they might also be analyzed using methods discussed later. When multiple competing projects are under consideration, an assessment is performed to decide which of them, based upon tradeoffs of the relative risks, benefits, and available funding, is best.[3] Comparing and selecting projects based upon criteria such as risk is discussed in Chapter 19.
Risk sources
Any uncertain factor that can influence the outcome of a project is a risk source or risk hazard. Identifying risk sources involves learning as much as possible about potential things known to go wrong and the outcome for each, as well as trying to identify things not already known, the “unknown unknowns.”
Risk sources in projects can be classified as internal risks and external risks.
Internal sources
These are sources of risk that originate inside the project and over which the project managers and stakeholders have some measure of control. They fall into three main categories: market risk, assumptions risk, and technical risk.
Market risk is the risk of not fulfilling market needs or the requirements of particular customers. Sources of market risk
include:
Failure to adequately define the market or customer needs and requirements.
Failure to identify anging needs and requirements.
Failure to identify products newly introduced by competitors.
Market risk stems from the developer misreading the market environment. It can be reduced by working closely with the
customer; thoroughly defining needs and requirements early in the project; closely monitoring trends and developments among
markets, customers, and competitors; and updating requirements as needed throughout the project.
Assumptions risk is risk associated with the numerous implicit or explicit assumptions made in feasibility studies and project plans during project conception and definition. Faulty, inaccurate, or invalid assumptions put the project in jeopardy of not meeting time, cost, or technical requirements or resulting in unanticipated and harmful side effects.
Tenical risk is the risk of encountering tenical problems in project work or with the end-item. (Sometimes these risks
are listed in special categories—sedule risks being those that would cause delays, cost risks those that would lead to overruns,
and so on.) Tenical risk is high in projects that involve new and untried tenical applications but low in projects that involve
familiar activities and tenologies.
One approa to expressing tenical risk is to rate the project primary process or end-item as being high, medium, or low
according to the following features:
4
Maturity. How experienced or knowledgeable is the project team in the project technology? An end-item or process that takes advantage of existing experience and knowledge is less risky than one that is innovative, untried, or cutting edge.
Complexity. How many steps, elements, or components are in the product or process, and how tightly are they interrelated? Ceteris paribus, an end-item or process with numerous, interrelated steps or components is riskier than one with fewer steps and simpler relationships.
Quality. How producible, reliable, and testable is the end-item or process? In general, an end-item or process that has been produced and is reliable and/or testable is less risky than one that has yet to be produced or has unknown reliability or testability.
Concurrency or dependency. To what extent do multiple dependent activities in the project overlap? Activities performed in sequence with no overlap are less risky than activities that are overlapped (i.e. the discrete-staged approach is less risky than fast-tracking).
A subcategory of tenical risks are health, safety, and environmental risks; these include hazards to project workers, the larger
society, and the ecology as a consequence of the project. ese risks stem from short-term hazards due to working conditions,
procedures, and materials used in the project and from long-term hazards from the functioning, operation, or mere existence of
the project end-item.
Chapters 10 and 18
Chapters 16, 18 and 20
External sources
These are risk sources that originate from outside the project and over which project managers often have limited or no ability to control. They include:
government regulations
competitors’ actions
interest rates and exchange rates
senior management or customer decisions regarding project priority, staffing, or budgets
customer needs and behavior
supplier/subcontractor relations and business failures
local physical environment (weather, terrain, infrastructure)
labor availability (strikes and walkouts)
material or labor resources (shortages)
customer or subcontractor control over project work and resources
local culture
adverse impacts of climate change.
In general, the risks associated with most of these categories tend to increase for international projects, the topic of Chapter 20.
Another source of risk is stakeholders. By definition, stakeholders are affected by the project, and
many of them are able to influence project outcomes—both positively and negatively. Identifying and
working with stakeholders is discussed in Chapters 16 and 18.
Identification teniques
Project risk sources (hereafter just called “risks”) are identified in many ways; principal among them are project analogy, checklists, WBS analysis, process flowcharts, project networks, cause-effect diagramming, brainstorming, and the Delphi technique.
Project analogy
The project analogy method involves scrutinizing the records, post-completion summary reports, and project team members’ recollections of earlier analogous projects to identify risks in upcoming projects. The more complete, accurate, and well catalogued the documentation of past projects and the better people’s memories, the more useful these are for identifying risks. Beyond just investigating past projects, the method requires identifying ones that are similar in significant ways to the project for which risks are being assessed.
Knowledge management methods, described in Chapters 10 and 18, promote learning from past projects that can help
anticipate risks in new ones.
Chelists
Documentation from prior projects is also used to create checklists of risk sources in projects. A checklist is initially based upon the experiences from past projects and is updated as new experience is gained from recent projects. Risk checklists can pertain to the project as a whole or to specific phases, work packages, or tasks within the project.
To illustrate, the elist in Table 11.1 shows the risk severity associated with three categories of risk sources: (1) status of
implementation plan, (2) number of module interfaces, and (3) percentage of components requiring testing. Suppose, for
example, an upcoming project will use a standard plan, have eight module interfaces, and test 20 percent of the system
components. us, the project will be rated as low, low, and medium, respectively, for the three risk sources.
See Chapter 10
As experience grows with completed projects, the checklists are expanded and updated. The more experience a manager or company gains with projects, the more they learn about the risks and the more comprehensive they can make the checklists. While a checklist cannot guarantee that all significant risk sources in a project will be identified, it does help ensure that the important known ones won’t be overlooked.
Table 11.1 Risk elist.
Risk Sources Risk Severity
Status of implementation plan
1. No plan required None
2. Standard plan, existing, complete Low
3. Plan being prepared Medium
4. Plan not started High
Number of interfaces between modules 1. Less than 5 None
2. 5-10 Low
3. 11-20 Medium
4. More than 20 High
Percent of system components requiring tests 1. 0-1 None
2. 2-10 Low
3. 11-30 Medium
4. Over 30 High
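The interface and testing rules in Table 11.1 lend themselves to a simple lookup. The sketch below encodes two of the table's three categories; the function names and structure are illustrative, not from the text.

```python
def rate_interfaces(n):
    """Risk severity by number of module interfaces (thresholds from Table 11.1)."""
    if n < 5:
        return "None"
    if n <= 10:
        return "Low"
    if n <= 20:
        return "Medium"
    return "High"


def rate_testing(percent):
    """Risk severity by percent of system components requiring tests (Table 11.1)."""
    if percent <= 1:
        return "None"
    if percent <= 10:
        return "Low"
    if percent <= 30:
        return "Medium"
    return "High"


# The project from the text: eight module interfaces, 20 percent of components tested.
print(rate_interfaces(8), rate_testing(20))  # Low Medium
```

As in the text's example, eight interfaces rate Low and 20 percent testing rates Medium.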
A disadvantage of risk elists is that people might look at only the risks listed and not consider any not on the list.
Chelists therefore need to be supplemented by other methods.
Work breakdown structure
Risks can be identified using the WBS. Each work package is scrutinized for potential technical hurdles or problems with managers, customers, suppliers, equipment, or resource availability. It is assessed for internal risks in terms of, for example, complexity, maturity, quality, and concurrency, and for external risks, for example, relying on a subcontractor to manage the work package. The risk of every work package is rated as, for example, high, medium, or low.
Process flowart
Project risks can also be identified from process flowarts that illustrate the steps, procedures, and flows between tasks and
activities in a project or work paage process. Examining the flowart enables the pinpointing of potential trouble spots and
risky areas.
Failure mode and effects analysis and hazard and operability study
The failure mode and effects analysis (FMEA) method (see Chapter 10) can be used to identify conditions leading to system failure and thus subjecting the project, people, and the environment to risk. A related method, the hazard and operability study (HAZOP), is a rigorous investigation of a system to assess what happens when it starts up, shuts down, or encounters problems. The method focuses on the system design and possible errors, omissions, or inherent hazards. Both FMEA and HAZOP are widely used in technical projects, HAZOP most commonly in process industries and infrastructure projects.
Chapters 7 and 8
See Chapter 10
Project networks and convergence points
Similarly, risks can be identified through scrutiny of the precedence relationships and concurrent or sequential scheduling of activities in project networks (Chapters 7 and 8). Risk sometimes increases at merge points in the network where work performed by different teams comes together and must be integrated; sometimes only then do problems become evident, such as subsystems produced by two teams not matching up or functioning correctly. The risk of project delay from this so-called “merge-point bias” is discussed in Chapter 8.
Brainstorming and cause-and-effect diagram
Risks can be identified from the collective experiences of project team members who participate in a brainstorming session to
share opinions about possible risk sources and record them on a cause-and-effect diagram, as shown in Figure 11.2.
Brainstorming and CE diagrams are used in two ways: (1) given an identified, potential outcome (effect), to identify the
potential causes (sources); (2) given a risk source (cause), to identify the outcomes that might ensue (effects). Figure 11.2
illustrates the first use: it shows potential sources leading to the effect of “completion delay.”
Figure 11.2
Cause-and-effect diagram.
The diagram in Figure 11.2 is divided into the generic risk categories of software, hardware, and so on (other categories are possible). Each category is subdivided into more fundamental sources of risk; for example, the category “staff” includes the risk of “staff shortage,” which could be caused by “inability to hire and train additional staff.” CE and related analysis techniques are further discussed in Chapter 10.
To encourage original thinking and a comprehensive list of possible risks, the risks should not be assessed during
brainstorming. Any early mention that a risk is “unrealistic” or “impossible” might lead to some very important risks being
discarded. Hence, no risks should be assessed until after the risk list has been compiled.
Risks related to the project end-item may also be discovered during formal design reviews, which are discussed in Chapter 10.
Delphi tenique
See Chapter 13
The term Delphi refers to a group survey technique for combining the opinions of several people to develop a single judgment. The technique comprises a series of structured questions and feedback reports. Each respondent is given a series of questions (e.g. what are the five most significant risks in this project?), to which he or she responds, giving opinions and reasons. The responses of everyone surveyed are summarized in one report that is given to everyone. Seeing others’ opinions, respondents then have the opportunity to modify their own opinions. Because the written responses are anonymous, no one feels pressured to conform to others’ opinions. If people change their opinions, they must explain the reasons why; if they don’t, they must also explain why. The process continues until the group reaches a collective opinion. Studies have proven the technique to be an effective way of reaching consensus.[5]
Risk symptoms and triggers
As the sources and outcomes of each risk are identified, so are its symptoms, which are visible indicators or warning signs that the risk is materializing; these serve as a trigger to initiate counteractions or contingencies to mitigate or combat the risk. For example, for the risk “failure to meet technical requirements,” a symptom might be “failure of component X during test”; should that symptom be observed, it would trigger the action “move to design plan B.”
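The symptom-and-trigger idea can be sketched as a small risk-register lookup. The entry below uses the example from the text; the field names and function are illustrative, not from the text.

```python
# One risk-register entry, using the example from the text.
risk_register = [
    {
        "risk": "failure to meet technical requirements",
        "symptom": "failure of component X during test",
        "response": "move to design plan B",
    },
]


def triggered_responses(observed_symptoms, register):
    """Return the contingency actions triggered by any observed symptoms."""
    return [entry["response"] for entry in register
            if entry["symptom"] in observed_symptoms]


print(triggered_responses({"failure of component X during test"}, risk_register))
# ['move to design plan B']
```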
11.3 Risk assessment
Risks are ubiquitous, but only the notable or significant ones require attention. If a risk and its consequences are significant, ways must be found to avoid or reduce the risk to an acceptable level. What is considered “acceptable” depends on the risk tolerance of project stakeholders. Often, managers with experience avoid risks (are risk averse) because they understand the risks and their consequences, whereas managers with less experience take risks (are risk tolerant) because they are ignorant of the consequences of the risks.
What is considered “significant” depends on the risk likelihood, the risk impact, and the risk consequence.
Risk likelihood
Risk likelihood is the probability that a risk factor will actually materialize.[6] It can be expressed as a numerical value between 1.0 (certain to happen) and 0 (impossible) or as a qualitative, ordinal rating such as high, medium, or low. (Interestingly, if a risk has a probability of 1.0, meaning it is certain to happen, it is considered not a risk but an “issue” to be dealt with, as discussed in Chapter 13.) Numerical values and qualitative ratings are sometimes used interchangeably. Table 11.2 shows an example: when, for instance, someone says, “the likelihood of this risk is low,” that means the probability of it happening, according to the table, is 20 percent or less.
But Table 11.2 is an illustration only, and the association between qualitative ratings and numerical values is subjective and depends on the experience of the project team and the risk tolerance of stakeholders. For example, Table 11.2 might have been created for a project with high economic stakes, in which case “high risk” equates to a numerical likelihood of greater than 50 percent. In a project with low economic stakes, “high risk” might equate to a numerical likelihood of 75 percent or more. People often have difficulty agreeing on the numerical likelihood value for a given qualitative rating and vice versa, even given the same information or experience; this is described later in Example 11.2.
Table 11.2 Risk likelihood: qualitative ratings for quantitative values.
Qualitative | Numerical
Low | 0-0.20
Medium | 0.21-0.50
High | 0.51-1.00
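Table 11.2's mapping from numerical probability to qualitative rating can be written directly. The default cutoffs below are the table's illustrative values; as the text stresses, they are subjective and would be tailored per project.

```python
def likelihood_rating(p, low_max=0.20, medium_max=0.50):
    """Map a probability to a qualitative rating (cutoffs per Table 11.2)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie between 0 and 1")
    if p <= low_max:
        return "low"
    if p <= medium_max:
        return "medium"
    return "high"


print(likelihood_rating(0.15))  # low
print(likelihood_rating(0.60))  # high
```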
Table 11.3 Likelihoods for different sources of failure.*
Likelihood | MH | MS | CH | CS | D
0.1 (low) | Existing design | Existing design | Simple | Simple | Independent
0.3 (minor) | Minor redesign | Minor redesign | Minor complexity | Minor complexity | Schedule dependent on existing system
0.5 (moderate) | Major change feasible | Major change feasible | Moderate complexity | Moderate complexity | Performance dependent on existing system
0.7 (significant) | Complex design; existing technology | New, but similar to existing software | Significant complexity | Significant complexity | Schedule dependent on new system
0.9 (high) | State of the art; little research done | State of the art; never done | Extreme complexity | Extreme complexity | Performance dependent on new system
*MH, failure likelihood due to hardware immaturity; MS, failure likelihood due to software immaturity; CH, failure likelihood due to hardware complexity; CS, failure likelihood due to software complexity; D, failure likelihood due to dependency on external factors.
Note: “failure” refers to not meeting technical goals.
Adapted from Roetzheim W. Structured Computer Project Management. Upper Saddle River, NJ: Prentice Hall; 1988, pp. 23-26.
Table 11.3 is a elist for five potential sources of failure in computer systems projects and associated numerical
likelihoods.
7 Looking at the Ms
column, the likelihood of failure for existing soware is low, but for state-of-the-art soware, it
is high. To repeat, the likelihood values are illustrative and would be tailored to ea project depending on the experience and
opinion of stakeholders. A likelihood estimate based on the opinions of several individuals (assuming all have relevant
experience) is usually more valid than one based on only a few.
When a project has multiple independent risk sources (as is common), they can be combined into a single composite likelihood factor, or CLF. Using the sources in Table 11.3, the CLF can be computed as a weighted average,
CLF = (W1)MH + (W2)CH + (W3)MS + (W4)CS + (W5)D (11.1)
where W1, W2, W3, W4, and W5 each have values 0 through 1.0 and sum to 1.0. This is illustrated in Example 11.1.
Example 11.1: Computation of Composite Likelihood Factor
The ROSEBUD project involves development of hardware and software with characteristics as follows: the hardware is existing and of minor complexity; the software will be developed as a minor redesign of current software and is of moderate complexity; the performance of the overall system will depend on how well it can be integrated into another, larger system. Thus, from Table 11.3, MH = 0.1, CH = 0.3, MS = 0.3, CS = 0.5, and D = 0.5. If all sources are weighted equally at 0.2, then
CLF = (0.2)(0.1) + (0.2)(0.3) + (0.2)(0.3) + (0.2)(0.5) + (0.2)(0.5) = 0.34
The application of this CLF will be discussed shortly.
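The weighted average in Example 11.1 can be sketched as follows; the likelihood values come from Table 11.3 and the equal weights of 0.2 from the example. The function name is illustrative, not from the text.

```python
def composite_likelihood(likelihoods, weights):
    """Composite likelihood factor (equation 11.1): weighted average of
    independent source likelihoods; weights must sum to 1.0."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1.0"
    return sum(w * l for w, l in zip(weights, likelihoods))


# ROSEBUD sources in the order MH, CH, MS, CS, D (values from Table 11.3).
clf = composite_likelihood([0.1, 0.3, 0.3, 0.5, 0.5], [0.2] * 5)
print(round(clf, 2))  # 0.34
```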
Note that the computation in equation (11.1) assumes that the risk sources are independent. If they are not (if, for example, failure due to software complexity depends on failure due to hardware complexity), then the two likelihoods cannot be summed. The sources would have to be subjectively combined into one source (e.g., “failure due to a combination of software and hardware complexity”) and a single likelihood value assigned based on judgment.
One way to show the interdependency of risk factors is with an influence diagram; an example is Figure 11.3.[8] To construct the diagram, start with a list of previously identified risks (e.g. from Figure 11.2). Then look at each risk and ask whether it is influenced by, or has influence on, any of the other risks. If so, draw them as in Figure 11.3, using arrows to show the direction of influence between related risks (e.g. S.1 influences S.2 and I.2). To minimize confusion, keep the number of risks on the diagram small, about 15 or fewer. Risks with the most connections are the most important; in Figure 11.3, these would be risks I.2, S.1, and S.2; each is influenced by other risks, which increases the failure likelihood.
See Chapter 4
Figure 11.3
Influence diagram.
Risk likelihood also is affected by the future: ceteris paribus, activities planned further in the future are riskier (have greater likelihood of failure) than those closer at hand.[9] This is because activities farther in the future have greater chances of being influenced by unknowns. The greater risks and more unknowns over phases later in the project are reasons for phased (rolling wave) project planning, discussed in Chapter 4. As a project enters the execution phase and moves toward completion, the unknowns diminish and so does the likelihood of failure. But there is a tradeoff: as the project progresses and the risks diminish, the stakes in the project (the amount of human and financial capital sunk into it) increase. This means that losses from failure incurred later in the project far exceed losses incurred earlier.
Risk impact
What would happen if a risk hazard materialized? The result would be a risk impact. A poorly marked highway intersection is a risk hazard; it poses the risk impact of a collision and personal injury. Risk impact in projects can be specified in terms of time, cost, performance, publicity, pollution, and so on. For example, the impact of insufficient resources might be failure to meet the target date.
Risk impact can be expressed as a qualitative rating such as high, medium, or low based upon a manager’s judgment about the impact. For example, a risk leading to a schedule delay of 1 month might be considered “medium impact,” whereas a 3-month delay would be deemed “high impact.”
Risk impact also can be expressed as a numerical measure between 0 and 1.0, where 0 is “not serious” and 1.0 is “catastrophic.” Again, the rating is subjective and depends upon judgment. Table 11.4, for example, represents judgments about the impacts associated with various technical, cost, and schedule situations, and suggested impact value ratings associated with each of them.[10] The assigned risk impact values are largely subjective, even when derived from empirical data.
Table 11.4 Impact values for different technical, cost, and time situations.
Impact value | Technical impact (TI) | Cost impact (CI) | Schedule impact (SI)
0.1 (low) | Minimal impact | No cost increase; within budget | Negligible schedule slip; compensated by slack time
0.3 (minor) | Small performance reduction | <10% increase | Minor (<1 month)
0.5 (moderate) | Moderate performance reduction | 10-25% increase | Moderate (1-3 months)
0.7 (significant) | Significant performance reduction | 25-50% increase | Significant (>3 months)
0.9 (high) | Technical goals possibly not achievable | >50% increase | Large (unacceptable)
Adapted from Roetzheim W. Structured Computer Project Management. Upper Saddle River, NJ: Prentice Hall; 1988, pp. 23-26.
Example 11.2: Estimating Risk Likelihood and Risk Impact in New Technologies
Risk assessment in new technologies is, well, difficult. The risk of a serious problem can stem from a chain of events (e.g. a machine malfunctions, a sensor does not detect it, an operator takes the wrong action), and to assign the probability of the risk requires identifying all the events in the chain, estimating the probability of each, and combining the probabilities together. Managers and designers can try to think of every event, but they can never be sure they haven’t missed some.
When a project involves new technologies, the estimates are largely guesses. In 1974, MIT released a report stating that the likelihood of a nuclear reactor core meltdown is one every 17,000 years. The report said a meltdown in a particular plant would occur only after many hundreds of years of operation, yet less than 5 years later, a reactor at Three Mile Island suffered a partial meltdown and released radioactivity into the atmosphere.[11]
The space shuttle is another case: NASA originally put the risk of a catastrophic accident at 1 in 100,000, but after the Challenger disaster revised it to 1 in 200. With the additional loss of Columbia (the second loss in 113 missions), the actual risk became roughly 1 in 56. The shuttles originally were design-rated for 100 missions, yet Columbia broke up during its 26th.[12] Few data points (five operational shuttles and 113 missions over 20 years) in combination with incredible complexity made it impossible to accurately predict the risks for the shuttle system, yet for many projects, the data available for estimating probabilities are even sparser.
Estimating impacts is equally difficult, and experts from different fields given identical facts often reach different conclusions. In one survey that rated the hazards of nuclear waste using a 17-point scale, biologists rated it 10.1, geologists 8.3, and physicists 7.3.[13] Risk assessment depends on culture and training and is never completely rational; because of this, it should be based upon the opinions of many experts representing a range of disciplines.
Just as the likelihoods for multiple risks can be combined, so can the impacts from multiple risk sources. A composite impact factor (CIF) can be computed using a weighted average,
CIF = (W1)TI + (W2)CI + (W3)SI (11.2)
where W1, W2, and W3 have values 0 through 1.0 and together sum to 1.0. CIF will range from 0 to 1.0, where 0 means “no impact” and 1.0 means “the most severe impact.” Example 11.3 illustrates.
Example 11.3: Computation of Composite Impact Factor
A particular failure to meet certain technical goals is expected to have minimal impact on technical performance and be corrected within 2 months at a cost increase of 20 percent. Therefore, from Table 11.4:
TI = 0.1, SI = 0.5, CI = 0.5
Suppose the most important criterion is technical performance, followed by the schedule, then cost, and the weights assigned to the criteria are 0.5, 0.3, and 0.2, respectively. Therefore, from equation (11.2):
CIF = (0.5)(0.1) + (0.3)(0.5) + (0.2)(0.5) = 0.30
Equation (11.2) assumes that the risk impacts are independent. If they are not, the equation does not apply, and the single-value impacts must be treated jointly, an example being “the impact of both a 20 percent increase in cost and a 3-month schedule slip is rated as 0.6.” Application of this CIF is discussed in the next section.
See Chapter 19
Another way to express risk impact is in terms of what it would take to recover from, or compensate for, an undesirable impact. For example, suppose that use of a new technology poses a risk of not meeting performance requirements. The plan is to test the technology, but then, if the tests reveal poor performance, to abandon it and instead use a proven approach. The risk impact would be the impact of switching technologies in terms of schedule delay and additional cost, for example, 4 months and $300,000.
Risk impact should be assessed for the entire project and articulated with the assumption that no response or preventive measures are taken. In the previous instance, $300,000 is the anticipated expense under the assumption that nothing will be done to avoid or prevent the failure of the new technology. This assessed impact will be used as a measure to evaluate the effectiveness of possible ways to reduce or prevent risk hazards, as discussed later.[14]
Risk consequence
Earlier, the notion of risk was defined as being a function of risk likelihood and risk impact; the combined consideration of both likelihood and impact is referred to as the risk consequence or risk exposure.
The most common way, mathematically, to express risk consequence is,
Risk consequence = (Likelihood) x (Impact) (11.3)
Using the previously computed likelihood (CLF) of 0.34 (Example 11.1) and impact (CIF) of 0.30 (Example 11.3), the risk consequence rating, RCR, is
RCR = (CLF) x (CIF) = (0.34) x (0.30) = 0.102
RCR ranges in value from 0 to 1.0, and a very small RCR such as 0.102 might be judged “inconsequential.” Assessing values of RCR as being high, medium, or low is subjective, and the principal use of RCR is to compare and prioritize risks: to separate those that can likely be ignored (small RCR, low consequence) from those that must be heeded (large RCR, high consequence).
Risk consequence can be expressed in other ways, too. For example, suppose the likelihood associated with a risk is 0.40, and, should the risk materialize, its estimated impact would be delaying the project by 4 months and increasing the cost by $300,000. The risk consequences for time and cost are thus
Risk consequence time (RT) = (4 months)(0.40) = 1.6 months = 6.4 weeks
Risk consequence cost (RC) = ($300,000)(0.40) = $120,000
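The expected-value consequences above are simply likelihood times impact; a minimal sketch, with the function name illustrative:

```python
def expected_consequence(likelihood, impact):
    """Expected-value risk consequence: likelihood multiplied by impact."""
    return likelihood * impact


# Likelihood 0.40; impacts of a 4-month delay and $300,000 in extra cost.
delay_months = expected_consequence(0.40, 4)        # 1.6 months
extra_cost = expected_consequence(0.40, 300_000)    # $120,000
print(delay_months, extra_cost)
```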
These are “expected value” risk consequences, or what the average outcomes would be if the situation were repeated a large number of times. The concept of expected value is further discussed in the Appendix to this chapter.
A disadvantage of using expected value is that it assumes people are “risk neutral,” which they are not. For example, you might be willing to play a game with a 50 percent chance of losing $10 (i.e. RC = $5), but would you still play it with a one-in-a-million (10^-6) chance of losing $5,000,000 (RC = $5 also)?
The magnitude of the consequences, whether high, medium, or low, as a function of the specified likelihood and impact values can be determined by plotting the values on a diagram such as Figure 11.4. Just as the likelihood and impact values are subjective, so is the positioning of the isobars demarcating regions of high, medium, and low risk consequence. Interestingly, this method is analogous to those used to assess projects, discussed in Chapter 19; a quick comparison of Figure 11.4 and Figure 19.5 reveals the similarity.
See Chapter 10
See Chapter 8
Figure 11.4
Risk consequence as a function of likelihood and impact.
The method is also similar to the failure mode and effect analysis (FMEA) technique discussed in Chapter 10. Both methods identify the consequences of risk, although FMEA is directed specifically at risks in technical systems.
PERT
The PERT and Monte-Carlo simulation methods discussed in Chapter 8 can be used to account for risk in project scheduling and to estimate the additional time needed to compensate for risks in meeting project deadlines.
The PERT method accounts for risk by using three time estimates for each project activity: a, m, and b (optimistic, most likely, and pessimistic times, respectively). Greater risk in an activity is reflected by a greater spread between a and b and especially between m and b. For an activity with no perceived risk, a, m, and b would be identical; any risk hazards identified are accounted for by raising the values of b and m or by moving b farther from m.
With PERT, recall it is the expected time t_e, not m, that is the basis for scheduled times, where the expected time is the mean of the beta distribution,

t_e = (a + 4m + b) / 6

Thus, for a particular activity with given optimistic and most-likely values (a and m), using a larger value of b will result in a larger value of t_e. This logically allows more time to complete the activity and compensate for risks. In addition, however, the larger value of b also results in a larger time variance for the activity because

V = [(b - a) / 6]^2

This larger V will result in a larger variance for the project completion time, which would spur the cautious project manager to add a time buffer (schedule reserve) to the project schedule.
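The PERT calculations can be sketched directly. The example durations below are my own; they simply show how widening the pessimistic estimate b raises both the expected time and the variance:

```python
# Standard PERT (beta approximation) expected time and variance from
# three estimates: a = optimistic, m = most likely, b = pessimistic.

def pert_expected_time(a: float, m: float, b: float) -> float:
    return (a + 4 * m + b) / 6

def pert_variance(a: float, m: float, b: float) -> float:
    return ((b - a) / 6) ** 2

# Low-risk activity: narrow spread between a and b.
print(pert_expected_time(4, 5, 6))   # 5.0 weeks
print(pert_variance(4, 5, 6))        # ~0.11

# Same activity judged riskier: b moved from 6 to 12 weeks.
print(pert_expected_time(4, 5, 12))  # 6.0 weeks
print(pert_variance(4, 5, 12))       # ~1.78
```

Moving b from 6 to 12 weeks adds a full week to the expected time and makes the variance roughly sixteen times larger, which is the signal that would prompt a cautious manager to add a schedule reserve.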
Risk priority
Based upon the computed risk consequences, the risk sources can be listed on a risk register or risk log, and those with medium-to-high consequences given a careful look. Project team members, managers, subcontractors, and customers review the list and prepare appropriate responses. Table 11.5 is an example risk register showing rank-ordered risk sources and mitigation responses.
One drawback of using expected value consequences to prioritize risks is that very low likelihood risks might be ignored even when they have severe or catastrophic impact. Suppose, for example, the impact of a project failure is 1,000 fatalities. If the risk likelihood is infinitesimal, then the expected consequence will be very small (tiny likelihood of many fatalities) and hence the risk relegated to a low priority.
15
In a complex system with a large number of relationships, where joint failures in several of them would lead to system failure, it is common to ignore such failures in the hope they will not occur. Usually the likelihood of joint failure is very low. Very low, however, is not the same as impossible, and a failure with terrible impact should never be ignored, regardless of how small the expected value. For example, the chemical plant accident at Bhopal, India, has been attributed to over 30 separate causes, their joint probability being so small as to be beyond comprehension. Yet they all did happen, causing an accident that resulted in between 1,800 and 10,000 deaths and 100,000 to 200,000 injuries.16 Similarly, the nuclear accident at Chernobyl was the result of six errors in human action, any one of which, if absent, would have precluded the accident. But despite the minuscule likelihood, all six did happen, resulting in an accident that immediately caused several dozen deaths, several hundred hospitalizations, and 135,000 evacuations, plus later an estimated 5,000 to 24,000 deaths from cancer in the former Soviet Union and many more countries throughout Europe and Asia.17 The lesson: any risk with a severe impact should never be ignored, no matter how small the likelihood.
Among the risks threatening projects everywhere are those associated with climate change. Although both the likelihood and impacts of such risks are difficult to assess, most scientists and planners agree that for many projects the likelihoods, impacts, and hence consequences of such risks will only increase.
Example 11.4: Assessing Vulnerability to Climate Change
Among the risks imposed by the anticipated adverse impacts of climate change (CC) are rises in sea level, increased frequency and ferocity of storms, higher average sea and atmospheric temperatures, urban heat islands, droughts, flooding, coastal erosion, wildfires, and landslides. Such impacts threaten the viability of projects and the life cycles of the end-item systems they create. Researchers and planners are grappling with ways to address these impacts. One example is the European Commission's guidelines for managing projects.18 The guidelines call for (1) assessment of a project's vulnerability and risk due to hazards posed by CC and (2) adapting the project to increase its "resilience" to the hazards of CC.
Table 11.5 Risk register (rank-ordered by consequence rating; consequence rating = impact [1-5] x likelihood [%]).

Risk ID 19. Creative Robotics software does not perform to customer requirements. Functional area: Application Development. Impact 5; likelihood 75%; consequence rating 375. Effect if risk materializes: schedule delays; cost of hiring a replacement vendor. Mitigation: test drone software at IBC site; incorporate lessons learned from Godzilla and Mothra projects; apply K-P standards to all CRC tasks.

Risk ID 6. Unknown site factors encountered during installation of storage racks and computer systems. Functional area: Site Operations. Impact 4; likelihood 90%; consequence rating 360. Effect: scheduling delays and possible system reengineering; increased system/project costs. Mitigation: use tracking mechanism to identify issues as they occur; be ready to shift schedule and resources as necessary.

Risk ID 29. Customer is unable to move operations on time to alternative site during installation. Functional area: Customer Relations, Site Operations. Impact 5; likelihood 70%; consequence rating 350. Effect: entire installation delayed. Mitigation: assist customer in developing plan/schedule to move to alternative site; rehearse move 1 week in advance.

Risk ID 12. Robotic drones perform poorly due to multiuser code employed on a single-user code system. Functional area: Application Development. Impact 4; likelihood 75%; consequence rating 300. Effect: increased retrieval/placement times; inability to meet contract requirement. Mitigation: perform multiuser code testing at IBC site; Ridgeway team on call for tech support.

Risk ID 4. Hyper-Drive system won't meet installation schedule, which requires uploading SKU data to new system. Functional area: Application Development. Impact 4; likelihood 75%; consequence rating 300. Effect: difficulty in handling some SKUs; could require a change in upload strategy. Mitigation: utilize lessons learned from Godzilla project for Hyper-Drive; have second Hyper-Drive available on standby.

Risk ID 32. Excessive manual changes during post-allocation process. Functional area: Site Operations. Impact 4; likelihood 70%; consequence rating 280. Effect: system startup delay. Mitigation: after initial system test, determine extent of excess work and readjust resources.

Risk ID 3. Conversion of Wildnight to DBA and redesigned application. Functional area: Application Development. Impact 4; likelihood 65%; consequence rating 260. Effect: if not handled properly, potential error in bucket placement. Mitigation: extensive quality assurance of process; dedicate proper staffing; training early in project.
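The rank-ordering in Table 11.5 can be sketched as a small program. The IDs, impact scores, and likelihoods below are taken from the table; the field names and abbreviated source descriptions are mine:

```python
# A minimal risk register, prioritized by consequence rating
# (impact on a 1-5 scale x likelihood in percent), per Table 11.5.

risks = [
    {"id": 19, "source": "Software fails customer requirements", "impact": 5, "likelihood": 0.75},
    {"id": 6, "source": "Unknown site factors during installation", "impact": 4, "likelihood": 0.90},
    {"id": 29, "source": "Customer cannot vacate site on time", "impact": 5, "likelihood": 0.70},
    {"id": 3, "source": "Conversion of Wildnight to DBA", "impact": 4, "likelihood": 0.65},
]

for r in risks:
    r["consequence"] = round(r["impact"] * r["likelihood"] * 100)

# Greatest consequence first, as a risk register is ordered.
register = sorted(risks, key=lambda r: r["consequence"], reverse=True)
for r in register:
    print(r["id"], r["consequence"])  # 19 375 / 6 360 / 29 350 / 3 260
```

Sorting by the computed rating reproduces the table's rank order, with the medium-to-high-consequence entries surfacing at the top for mitigation planning.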
Not every project is affected by CC. When there is reason to suspect that a project or the life cycle of its end-product might be affected, the project's vulnerability to CC hazards should be assessed, where

Vulnerability = Sensitivity x Exposure

Sensitivity means "how sensitive to CC hazards is this specific type of project?" irrespective of project location. For example, any tunneling, subway, or water-spanning bridge project is threatened by floods, regardless of project location. The analysis of sensitivity should address the various themes or aspects of the project, such as on-site assets, resources needed, outputs (products and services), and transport links.
For example, Table 11.6 represents a river-spanning bridge project.19 Since, in general, aspects of such projects (e.g. elevation of the bridge and roads leading to it) depend on water levels, the CC hazard of flooding might be considered a serious threat. Other CC hazards such as temperature rise and drought would pose less serious threats.
Table 11.6 Sensitivity of features of river-spanning project to climate hazards.
Exposure means “how exposed to CC hazards is the location of this particular project?” irrespective of the project type.
For example, any project located by a river or in a coastal low-lying plain might be threatened by floods but less threatened
by temperature rise or drought (e.g. coastal cities like London, Miami, Tokyo, Mumbai, Guangzhou, Dhaka, Jakarta, Lagos).
Exposure has two parts: exposure in the current climate and exposure in the future climate (as predicted by, say, climate
models). In many cases, exposure to CC hazards can be expected to worsen over time. Table 11.7 reflects worsening threats
to low-lying coastal areas from the hazards of floods, heat rise, and drought.
Table 11.7 Exposure of low-lying coastal area locations to climate hazards.
Summarizing the threats in Tables 11.6 and 11.7:
Flood: highest sensitivity score is High; highest exposure score is High
Heat: highest sensitivity score is Medium; highest exposure score is Low
Drought: highest sensitivity score is Medium; highest exposure score is Medium.
Vulnerability to specific CC hazards, which is a function of both sensitivity and exposure, can be expressed in a table that combines the two. For example, Table 11.8 shows the vulnerability to CC threats for a river-spanning bridge project located in a low-lying coastal area: the project has high vulnerability to floods and medium vulnerability to heat rise and drought.
Table 11.8 Vulnerability of project to climate hazards.
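One way to sketch the combination of sensitivity and exposure ratings is shown below. The combination rule (overall vulnerability takes the higher of the two ratings) is an assumption on my part, chosen because it reproduces the three outcomes stated for the bridge project; the actual guidelines may define the matrix differently:

```python
# Combining sensitivity and exposure ratings into a vulnerability
# rating, per Vulnerability = Sensitivity x Exposure.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def vulnerability(sensitivity: str, exposure: str) -> str:
    """Assumed rule: vulnerability takes the higher of the two ratings."""
    return max(sensitivity, exposure, key=LEVELS.get)

# River-spanning bridge in a low-lying coastal area (Tables 11.6-11.8):
print(vulnerability("High", "High"))      # High   (flood)
print(vulnerability("Medium", "Low"))     # Medium (heat)
print(vulnerability("Medium", "Medium"))  # Medium (drought)
```

Under this rule the project comes out with high vulnerability to floods and medium vulnerability to heat rise and drought, matching Table 11.8.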
Important to note is that with climate change, many things can be expected to change. For example, a project that is assessed today to have low exposure to a hazard might in 20 years have medium exposure and in 100 years high exposure.
See Chapter 12
Therefore, with appropriate forethought, project planners might take action today, in a currently low-risk situation, to prevent it from becoming high risk in the future.
The purpose of the vulnerability assessment is to raise awareness about the potential climate change impacts on a project. In response to situations rated as high or medium vulnerability, project planners would instigate a more thorough risk assessment and adopt measures to reduce the project's vulnerability to, or increase its resilience against, CC threats.
11.4 Risk response planning
Risk response planning addresses the matter of how to deal with risk. In general, the ways of dealing with a risk are to transfer it, avoid it, reduce it, accept it, or prepare a contingency plan for it.
Transfer risk
Risk can be transferred between customers, contractors, and other parties using insurance, contracting, and contractual
incentives.
Insurance
The customer or contractor purchases insurance to protect against a wide range of risks, including those associated with:
Property damage or personal injury suffered as a consequence of the project.
Damage to materials while in transit or in storage.
Breakdown or damage of equipment.
Theft of equipment and materials.
Sickness or injury of workers, managers, and staff.
Fluctuations in exchange rates on imported items, or "forward cover."
Subcontract work
Risk can arise from uncertainty about how to approach a problem or situation. One way to avoid such risk is to hire contractors that specialize in those specific problems or situations. For example, to minimize the financial risk associated with the capital cost of tooling and equipment for production of a large, complex system, a manufacturer might subcontract the production of the system's major components to suppliers familiar with them. This relieves the manufacturer of the financial risk associated with the production tooling and equipment. But, as mentioned, transferring one kind of risk often means inheriting another. For example, in subcontracting work for the components, the manufacturer must now rely on outsiders, which increases the risks associated with quality control and scheduling. But such risks often can be reduced through the contract agreement and careful management of the subcontractors.
Choice of contract
Risk and contracts are inextricably linked, since risk can be transferred or allocated to other parties through use of the appropriate kind of contract. This is addressed in depth in Chapter 12, but briefly, it works like this. In a fixed-price contract, the contractor assumes most of the risk of a cost overrun; in a cost-plus contract, the customer assumes most of the risk. When the statement of work is clear and well-defined, a contractor will readily accept a fixed-price contract because the work is certain and unlikely to change. When, however, the scope of the work is unclear and the potential for change is great, the contractor prefers a cost-plus contract, which will cover all expenses incurred in the event of changes. Sometimes the two parties negotiate to reach an agreement that neither finds too risky.
But not all risks can be transferred. Even with a fixed-price contract, where ostensibly the contractor assumes the risk of overruns, the customer will nonetheless incur damages and hardship should the project fall behind schedule or the contractor declare bankruptcy. The project still must be completed, and someone has to pay for it. To avoid losses, a contractor might feel pressured to cut corners, which increases the customer's risk of receiving a subpar-quality end-item.
Risk responsibility
Risk may be transferred, but it is never completely "offloaded." Usually, a warranty or guarantee in the contract specifies the time or place at which the risk is transferred from one party to another. For instance, when an item is procured and shipped from abroad, the risk of damage usually remains with the seller while the item is being shipped, but as soon as it is hoisted over the rail of the ship, risk is transferred to the buyer.
A party willing to accept high responsibility for risk in a project will usually demand a high level of authority over the project. For example, a customer willing to risk poor quality or a cost overrun will almost certainly insist on a large measure of control over aspects of the project that influence quality and cost. Parties bearing high risk will also usually insist on compensation to cover the risks. In cost-plus contracts, for example, the contractor's risk is covered by compensation for all expenses, while the customer's risk is covered by management oversight of the contractor to prevent expense abuses.
Avoid risk
Risk can be avoided by such measures as increasing supervision, eliminating risky activities, minimizing system complexity, altering end-item quality requirements, changing contractors, and incorporating redundancies. But attempts to avoid risk often entail the addition of innumerable management controls and monitoring systems, which tend to increase system complexity and, perversely, introduce new sources of risk. Risk avoidance measures can also diminish payoff opportunities. Many risk factors can be avoided, but not all, especially in complex or leading-edge projects. Research and new product development projects are inherently risky but offer the potential for huge benefits later on. Because the size of the risk is often proportionate to the potential payoff, rather than avoiding risk it is better to try to reduce risk to an acceptable level.
Reduce risk
Among the ways to reduce technical risk (its likelihood, impact, or both) are to:20
Employ the best technical team.
Base decisions on models and simulations of key technical parameters.
Use mature, computer-aided system engineering tools.
Use parallel development on high-risk tasks.
Provide the technical team with incentives for success.
Hire outside specialists for critical review and assessment of work.
Perform extensive tests and evaluations.
Minimize system complexity.
Use design margins.
e laer two points deserve further explanation. In general, risk and uncertainty increase with system complexity: the
more elements in a system and the more they are interconnected, the more likely an element or interconnection will go wrong
and impact other elements. us, minimizing complexity through reorganizing and modifying elements in product design and
project tasks reduces the risk. For example, decoupling of activities and subsystems, that is, making them independent of one
another, prevents a failure in one activity or subsystem from spreading to others.
Incorporating design margins into design goals is another way to reduce risk associated with meeting tenical
requirements.
21 A design margin is a quantified value that serves as a safety buffer held in reserve and allocated by
management. In general, a design margin is incorporated into a requirement by seing the target design value to be stiffer or
more rigorous than the design requirement. In particular:
See Chapter 8
See Chapter 10
Target Value = Requirement + Design Margin
By striving to meet a target value that is stiffer than the requirement, the risk of not meeting the requirement is reduced.
Example 11.5: Design Margin Application for the Spaceship
The weight requirement for a spaceship navigation system is 90 lbs maximum. To allow for the difficulty of reaching the requirement (and reduce the risk of not meeting it), the design margin is set at 10 percent, or 9 lbs. Thus, the target weight for the navigation system becomes 81 lbs.
A design margin is also applied to each subsystem or component within the system. If the navigation system is entirely composed of three major subsystems, A, B, and C, then the three together must weigh 81 lbs. Suppose C is an OTS item with a weight of 1 lb that is fixed and cannot be reduced. But A and B are being newly developed, and their design requirements have been set at 50 lbs for A and 30 lbs for B. Suppose a 12 percent design margin is imposed on both subsystems; thus, the target weights for A and B are 50 x (1.0 - 0.12) = 44 lbs and 30 x (1.0 - 0.12) = 26.4 lbs, respectively.
Design margins provide managers and engineers a way to address problems in an evolving design. Should the target value for one subsystem prove impossible to meet, then portions of the margins from other subsystems or the overall system can be reallocated to that subsystem. Suppose subsystem B cannot possibly be designed to meet its 26.4-lb target, but subsystem A can be designed to meet its target; the target for B can then be increased by as much as 3.6 lbs (its margin value) to 30 lbs. If that value also proves impossible to meet, the target can be increased by another 6 lbs (subsystem A's original margin value) to 36 lbs. Even if that value cannot be met, the target can be increased again by as much as another 9 lbs (the margin value for the entire system) to 45 lbs. Even with these incremental additions to B's initial target value, the overall system would still meet the 90-lb weight requirement.
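The margin arithmetic in Example 11.5 can be sketched in code. A minimal sketch using the example's figures; note that for a weight ceiling the "stiffer" target is obtained by subtracting the margin (i.e. multiplying by 1 minus the margin fraction):

```python
# Target values with design margins, using the Example 11.5 figures.
# For a maximum-weight requirement, target = requirement x (1 - margin).

def target_weight(requirement_lbs: float, margin_fraction: float) -> float:
    return requirement_lbs * (1.0 - margin_fraction)

system_target = target_weight(90, 0.10)  # 81 lbs for the navigation system
a_target = target_weight(50, 0.12)       # 44 lbs for subsystem A
b_target = target_weight(30, 0.12)       # 26.4 lbs for subsystem B
c_fixed = 1.0                            # OTS item, weight fixed

# Margins available for reallocation if subsystem B overruns its target:
b_margin = 30 - b_target                 # ~3.6 lbs
a_margin = 50 - a_target                 # ~6 lbs
system_margin = 90 - system_target       # ~9 lbs
# Worst case, B may grow to 26.4 + 3.6 + 6 + 9 = 45 lbs, and the system
# still meets the 90-lb requirement (44 + 45 + 1 = 90).
```

The running total of reallocatable margin (3.6 + 6 + 9 lbs) is exactly the headroom the example walks through when subsystem B's target proves impossible to meet.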
While design margins help reduce the risk of not meeting requirements, they encourage designers to exceed requirements, for example, to design systems that weigh less than required but cost more. Thus, the margins must be carefully set so as to reduce the risks while not increasing costs.
Design margins focus on risks associated with meeting technical requirements. Among the ways to reduce risks associated with meeting schedules are:22
Create a master project schedule and strive to adhere to it.
Schedule the riskiest tasks as early as possible to allow time for failure recovery.
Maintain close focus on critical and near-critical activities.
Put the best workers on time-critical tasks.
Provide incentives for overtime work.
Shift high-risk activities in the project network from series to parallel.
Organize the project early, and staff it adequately.
Insert project and feeding buffers (contingency reserves) into the schedule, as discussed in Chapter 8.
To reduce the risk associated with meeting budget or cost targets:23
Identify and monitor the key cost drivers.
Use low-cost design alternatives.
Verify system design and performance through modeling, reviews, and assessment.
Maximize usage of proven technology and commercial off-the-shelf equipment.
Provide contingency reserves in project budgets.
Perform early breadboarding, prototyping, and testing on risky components, as discussed in Chapter 10.
The last way is especially powerful. Breadboards and prototypes, that is, test mockups and models, enable ideas to be tested experimentally so designs can be corrected early in the project.24 This greatly reduces the need for later design changes, which can be costly. The following illustrates other ways to reduce risk.
Example 11.6: Managing Schedule and Cost Risk at Vancouver Airport25
See Chapter 5
The expansion project at Vancouver International Airport involved constructing a new international terminal building (ITB) and a parallel runway. The schedule for the $355-million project called for full operation of the ITB less than 3.5 years after the project was approved and opening of the new runway 5 months after that. The project team identified the following as major risks in meeting the tight budget and schedule constraints:
1. Risk in structural steel delivery and erection. Long procurement lead times from steel mills and difficulties in scheduling design, fabrication, and erection make big steel projects risky. Recognizing this, the project team awarded the structural steel contract very early in the project to allow ample time to design, procure, fabricate, and erect the 10,000 tons of steel required for the ITB. As a result, the ITB was completed on time.
2. Material handling risk. Millions of cubic meters (cum) of earth had to be moved, and over 4 million cum of sand were required for concrete runways and taxiways. The project team developed an advance plan to enable coordinated movement of earth from one locale to another and used local sand in the concrete. This saved substantial time and money, enabling the runway to be completed a year ahead of schedule.
3. Environmental risk. Excavations and transport of earth and sand by barges threatened the ecology of the Fraser River estuary. These risks were mitigated by advance planning and by constantly identifying and handling problems as they arose through the cooperative efforts of all stakeholders.
4. Functionality risk. Because new technologies pose risk, the project team adopted a policy of using only proven (OTS) components and technology whenever possible. Consequently, all ITB systems were installed with few problems and were operational according to schedule.
One additional way to reduce the risk of not meeting budgets, schedules, and technical performance is to do whatever is necessary to achieve the requirements, but nothing more.26 The project team might be aware of many things that could be done beyond the stated requirements; however, in most cases, these will consume additional resources and add time and cost. Unless the customer approves the added time and cost, all extras should be avoided.
Contingency planning
Contingency planning means anticipating risks that might arise and then preparing a course of action to cope with them. The initial project plan is followed, and throughout execution the risks are closely monitored. Should a risk materialize, as indicated by a trigger symptom, the contingency plan is adopted. The contingency plan can be a post-hoc remedial action to compensate for a risk impact, an action undertaken in parallel with the original plan, or a preventive action initiated by a trigger symptom to mitigate the risk impact. Multiple contingency plans can be developed based upon "what-if" scenarios for multiple risks.
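The trigger-symptom mechanism can be sketched as a simple monitoring check. The metric names, thresholds, and risk names below are illustrative assumptions, not taken from the text:

```python
# Trigger-symptom monitoring: each contingency plan is armed by a
# threshold on some observable project metric (names are hypothetical).

def check_triggers(metrics: dict, triggers: dict) -> list:
    """Return names of risks whose trigger threshold has been reached."""
    fired = []
    for risk, (metric, threshold) in triggers.items():
        if metrics.get(metric, 0) >= threshold:
            fired.append(risk)
    return fired

triggers = {
    "vendor slippage": ("days_late", 10),    # adopt the backup-vendor plan
    "cost overrun": ("cpi_breaches", 3),     # adopt the cost-recovery plan
}

status = {"days_late": 12, "cpi_breaches": 1}
print(check_triggers(status, triggers))  # ['vendor slippage']
```

Here the 12-day delay crosses the vendor-slippage trigger, so that contingency plan would be adopted, while the cost-overrun plan stays dormant.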
Accept risk (do nothing)
Not all impacts are severe. If the cost of avoiding, reducing, or transferring the risk is estimated to exceed the benefits, then "do nothing" might be the best alternative. In Figure 11.4, the do-nothing strategy would be chosen for risks falling in the "low consequence" region (except when the impact is potentially catastrophic, which is off the chart). Besides, sometimes nothing can be done to avoid, reduce, or transfer a risk, in which case the risk must be accepted, regardless of the consequence. Fortunately, such situations are rare.
Responding to a risk sometimes creates a new, secondary risk (see Example 5.1 in Chapter 5). When preparing a risk response plan, the project management team should check for these before implementing the plan.
11.5 Risk monitoring and response
Identified risks are documented in a risk log or risk register and ordered by rank, greatest risk consequence first. For risks with serious consequences, mitigation plans are prepared and strategies adopted (transfer, reduce, avoid, or contingency); for those of little or no consequence, nothing is done (accept).
Thereafter, the project should be continuously monitored for symptoms of previously identified risks as well as for newly emerging risks (not previously identified). Known risks may take a long time before they start to produce problems. Should a symptom reach the trigger point, a decision is made as to the course of action, which might be to institute a prepared plan or to convene a meeting to find a solution. Sometimes the response is to do nothing; however, everything should be a conscious choice, not an oversight, and be tracked afterward to ensure it was the right choice.
All risks deemed critical or important are tracked throughout the project or the phases to which they apply; to guarantee this, someone should be assigned responsibility to track and monitor the symptoms of every important risk.
Altogether, the risk log, mitigation strategies, monitoring methods, people responsible, contingency plans, and schedule and budget reserves constitute the project risk management plan. The plan is continuously updated to account for changes in risk status (old risks avoided, downgraded, or upgraded; existing risks reassessed; new risks added). The project manager (and sometimes other managers and the customer) is alerted about emerging problems; ideally, the project culture embodies candor and honesty, and people readily notify the project manager whenever they detect a known risk materializing or a new one emerging.
11.6 Project management is risk management
Risk management supplements and is part of other project management practices such as requirements definition, scheduling, budgeting, change control, and performance tracking and control. Managers use all of these to identify and assess risks so they can proactively reduce risks or plan for the consequences. If, for example, a project must finish in 9 months but is estimated to take 12, management can take a multitude of steps to increase the likelihood of finishing in 9.
Ideally, risk identification, assessment, and response planning are treated as a formal aspect of project planning, and the resulting risk management plan is integrated into the execution plan alongside the schedule, budget, quality plan, change control plan, communication plan, and so on. During project execution, risk tracking is incorporated as a measure within the tracking and control process. Ideally, many project team members and other stakeholders are involved in risk identification, response planning, and risk tracking. Whenever someone detects a risk, she contacts the project manager or risk officer (described later); then, depending on the severity of the risk, messages (pop-ups) are sent to others in the project informing them of the risk.
Of course, not all projects need comprehensive risk management. On small projects, a small, well-paid, and motivated staff can usually overcome difficulties associated with the risks and, if not, the consequences are usually small anyway. In larger projects, however, where the stakes and risks of failure are high, risk management is important. Such projects require awareness of and respect for all the significant risks: safety, legal, social, political, technical, and financial.
Risk management principles
Every project for which non-trivial risks are known or suspected should have a risk management plan. The plan should specify, for a particular project, the procedures for identifying and assessing risks, the person(s) involved in the risk management process and their specific responsibilities, methods for assessing and prioritizing risks, guidelines for risk mitigation and contingency planning, and methods for tracking and reporting risks and addressing emergent, unforeseen risks. The plan should address general principles for managing risks, including the following:27
Create a risk profile for each risk source; this includes the risk likelihood, cost and schedule impact, and contingencies to be invoked. The profile should also specify the earliest visible symptoms (trigger events) that would indicate when the risk is materializing. In general, high-risk sources should have lots of eyes watching, and contingency plans should be updated to reflect project progress and emerging risks. Figure 11.5 illustrates a risk profile template, which would include a summary of everything known about a risk. This document would be retained in a binder or library and updated as necessary until the risk is believed to no longer exist and is "closed out."
Appoint a risk officer to the project, someone whose principal responsibility is the project's risk management. This should not be the same person as the project manager; he should not be a can-do person but instead a devil's advocate, identifying all the reasons something might not work, even when everyone else believes it will.
Include in the budget and schedule a calculated risk reserve, a buffer of money, time, and other resources to deal with risks should they materialize. The reserve is used at the project manager's discretion to cover risks not specified by each risk's profile. It may include the RT or RC values (see the Appendix to the chapter) or other amounts. It is usually not associated with a contingency plan, and its use might be constrained to particular applications or areas of risk. The project manager keeps the exact amounts held in the reserves strictly confidential (else the project will tend to consume whatever amount is held), although others should know there is a reserve available (otherwise they will build in their own secret reserves).
Establish communication channels (sometimes anonymous) within the project team to ensure any bad news gets to the project manager quickly, risks are continually monitored, and risk status is continuously assessed and communicated.
Specify procedures to ensure accurate and comprehensive documentation of proposals, project plans, change requests, progress reports, and the post-completion summary report. In general, the better the documentation of past projects, the more information is available for planning future similar projects and identifying possible risks.
Expect the unexpected
Having identified myriad risk hazards and consequences and prepared all kinds of specialized controls and safeguards, people can be led to believe that everything that possibly could go wrong has been anticipated and accounted for; thus, when something still goes wrong, it catches them completely off guard. Although risk planning can cover many or most risks, it can never cover all of them. Thus, risk planning should be tempered with the concept of "non-planning," or Napoleon's approach, which is to expect that something surely will go wrong and to be ready to deal with it as it emerges. This is as important to coping with risk as extensive planning and believing that all risks have been covered.
28
Figure 11.5
Document for risk profile and management of an identified risk.
Example 11.7: Managing Risks as They Arise—Development of the F-117 Stealth Fighter
29
An example of how to manage risk in R&D projects is the F-117 Stealth Fighter program, aimed at developing a revolutionary
new “low observable” (difficult to detect with radar) aircraft capable of high-precision attacks on enemy targets. The F-117
involved high risk because many lessons had to be learned during the program and significant challenges had to be
overcome. But managers expected challenges would occur throughout the program, from early design and testing through
to evaluation and final deployment. To handle the risks, numerous decisions were made on the spot between program
managers for Lockheed (contractor) and the Air Force (customer). The program was set up for rapid deployment of
resources to solve problems as they arose. Managers from the customer and the contractor worked closely to minimize
bureaucratic delays. Schedules were optimistic and based on assumptions that everything would work; however, everyone
throughout the management chain knew the risks and the challenges to overcome, so problems never came as a surprise or
threatened program support—a good example of managing risk as opposed to avoiding risk.
Risk management caveats
For all the good it can provide, risk management can itself create risks. Most every philosophy, procedure, or prescription has
caveats, and that is true of risk management as well. Misunderstanding or misapplication of risk management concepts can
stymie a project by fooling people into thinking they have nothing to worry about, which can actually leave them worse
prepared for dealing with emerging problems they didn’t anticipate.
Having created a risk management plan, managers might be emboldened to take risks they otherwise might not take. Much
of the input to risk analysis is subjective; it addresses what might happen—not what will happen. Data analysis and planning
give people a sense of having power over events, even when the events are chancy. Underestimating the risk likelihood or
impact can make consequences seem insignificant, leading some people to venture into dangerous territory that common sense
would disallow. For example, the security of seat belts and air bags encourages some drivers to take risks such as driving too
close behind the next car or accelerating through yellow lights. The result is an increased likelihood of an accident.
Repeated experience and good documentation are vital ways to identify risks, but they cannot guarantee that all important
risks will be identified. Same and similar outcomes that have occurred repeatedly in past projects eventually deplete people’s
capacity to imagine anything else happening. As a result, some risks become unthinkable. Even sophisticated computer models
are worthless when it comes to dealing with the unthinkable, because a computer cannot be instructed to analyze situations that
are beyond the imagination of the humans who created it. Experience provides but a sample of possibilities, not the entire
population.
Managing risk does not mean eliminating it, although some managers don’t know that. The prime symptom of “trying to
eliminate risk” is micromanagement: excessive controls and documentation requirements and trivial demands for the
authorization of everything. Projects inherently entail uncertainty and risk. Micromanagement is seldom appropriate, and for
some projects it can be disastrous, particularly when the projects involve the new, untried, and untested. When management
tries to eliminate risk, it stifles innovation and, say Aronstein and Piccirillo, “forces a company into a plodding, brute force
approach to technology, which can be far more costly in the long run than a more adventurous approach where some programs
fail but others make significant leaps forward.”
30
The appropriate risk management strategy for most projects is to try to accommodate and mitigate risk, not to avoid or eliminate it.
11.7 Summary
Project risk management involves identifying the risks, assessing them, and planning appropriate responses. Identifying project
risks starts in the project conception phase. Project risks stem from many sources, such as failure to define and satisfy customer
needs and market requirements; technical problems arising in the work; weather, labor, and supplier problems; competitors’
actions; and changes imposed by outsiders. Such risk hazards are identified using a variety of methods that draw from
experience with past projects and scrutiny of future projects.
Of the innumerable risks in projects, only the important ones need be addressed. Importance depends on the likelihood, impact,
and overall consequence of the risk. Likelihood is the probability a risk will occur, impact is the effect of the risk, and risk
consequence is the combination of the two. In general, measures of risk consequence are used to decide which risks should
receive attention and which can be ignored. As a precaution, however, every risk with severe impact should be carefully
considered, even when the likelihood is very small.
Risk response planning addresses the ways the identified risks will be handled. Some risks can be transferred to other parties or
spread among many stakeholders or subcontractors. Some can be avoided; some can be eliminated. Sometimes, however, high
risk is associated with high benefits, and trying to eliminate the risk can also reduce the payoff. Thus, better than trying to
eliminate risk is to reduce it to a manageable level. For areas of high risk, alternative contingency plans should be
developed.
The principles of risk management include creating a risk management plan that specifies the risks, their symptoms, and
backup plans; appointing a risk officer who is responsible for identifying and tracking the risks; and setting aside a budget and
schedule reserve. The plan must specify the ways to monitor risks and emerging problems and to communicate them to the
project manager. Proper documentation from past projects furnishes lessons learned and forewarns managers about potential
risks in upcoming projects. No amount of preparation can anticipate all risks; managers should expect the unexpected and be
ready to deal with risks as they arise.
The following Appendix discusses common analytical methods for assessing risk consequences and deciding
between alternative risk responses. Similar methods are employed in project selection—the topic of Chapter 19.
APPENDIX: RISK ANALYSIS METHODS
Four common methods for risk analysis are expected value, decision trees, payoff tables, and simulation.
Expected value
Selection of the appropriate risk response sometimes depends on the risk consequences expressed in terms of the expected
value of costs or schedules.
An expected value is the average outcome of numerous repeated events. For risk assessment, the expected value represents the
average outcome of a project if it were repeated many times, accounting for the possible occurrence of risk. Mathematically, it
is the weighted average of all the possible outcomes, where the weights are the likelihoods of those outcomes; that is,
Expected value = Σ [(Outcome) x (Likelihood of outcome)]
The consequence of risk on project duration, called the risk time, RT, is the expected value of the estimated time to correct for
the risk, computed as
RT = (Corrective time) x (Likelihood) (11.6)
The consequence of risk on project cost, called the risk cost, RC, is the expected value of the estimated cost to correct for the
risk, computed as
RC = (Corrective cost) x (Likelihood) (11.7)
For example, suppose the baseline time estimate (BTE) for project completion is 26 weeks, and the baseline cost estimate (BCE)
is $71,000. Assume that the risk likelihood for the project as a whole is 0.3, and, if the risk materializes, the project would be
delayed by 5 weeks and cost $10,000 more. Because the probability of the risk materializing is 0.3, the probability of it not
materializing is 0.7. If the risk does not materialize, no corrective measures will be necessary, and the corrective time and cost
will be nil. Hence
RT = (5)(0.3) + (0)(0.7) = 1.5 Weeks
and
RC = ($10,000)(0.3) + (0)(0.7) = $3,000
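The RT and RC computations above can be sketched in a few lines of Python (a minimal illustration using the figures from the worked example):

```python
# Expected-value risk consequence for a single project-level risk
# (figures from the worked example above).
likelihood = 0.3           # probability the risk materializes
corrective_time = 5        # weeks of delay if it does
corrective_cost = 10_000   # extra cost ($) if it does

# Weighted average over the two outcomes: risk occurs / does not occur
# (the "does not occur" branch contributes zero).
RT = corrective_time * likelihood + 0 * (1 - likelihood)
RC = corrective_cost * likelihood + 0 * (1 - likelihood)

print(round(RT, 2))  # 1.5  (weeks of schedule reserve)
print(round(RC, 2))  # 3000.0  (dollars of budget reserve)
```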
These figures, RT and RC, are the schedule reserve and project contingency (budget reserve), respectively,
mentioned in Chapters 8 and 9. Accounting for the risk time, the expected project completion time, ET, is
ET = BTE + RT = 26 + 1.5 = 27.5 weeks
Accounting for the risk cost, the expected project completion cost, EC, is
EC = BCE + RC = 71,000 + 3,000 = $74,000
When the corrective time and cost cannot be estimated, ET and EC are computed as
ET = BTE(1 + Likelihood) = 26(1.3) = 33.8 weeks (11.8)
EC = BCE(1 + Likelihood) = $71,000(1.3) = $92,300 (11.9)
These examples account for risk factors that affect the project as a whole. Another way to determine risk consequence is to first
disaggregate the project into work packages or phases and then estimate the risk likelihood and corrective time and
cost for each. These individual estimates are then aggregated to determine ET and EC for the entire project. This approach tends to give
more credible RT and RC estimates than equations (11.6) through (11.9) because risks pinpointed to individual tasks or
phases can be more accurately assessed, and the necessary corrective actions and associated times and costs for particular tasks
are more easily identified.
Say a project has eight work packages; Table 11.9 lists cost information and EC for each, where EC is computed as
EC = BCE + [(Corrective cost) x (Likelihood)]
As shown in Table 11.9, the EC for the project is $74,850.
Now, for the same eight work packages, Table 11.10 gives time information, where ET is computed as
ET = BTE + [(Corrective time) x (Likelihood)]
Suppose the project network is as shown in Figure 11.6. Ignoring the risk time, the critical path would be J—M—V—Y—W—
X and the project BTE 26 weeks. Accounting for risk consequences, the critical path would be the same, but the duration (the
project ET) would increase to 27.9 weeks.
31
Although activities on critical and near-critical paths should be carefully monitored, in general, every activity with high-risk
consequences (high likelihood and/or high impact) should be carefully monitored, even when not on the critical path.
Increasing the project schedule and budget to account for the expected risk time or risk cost is no guarantee of adequate
protection against risk. The expected risk time and cost are equivalent to long-run averages, which result from
repeating something many times; this is questionable in projects, since project activities are seldom identically repeated.
Table 11.9 EC computation.
WBS Element BCE Corrective Cost Likelihood EC
J $10,000 $ 2,000 .2 $10,400
M 8,000 1,000 .3 8,300
V 16,000 4,000 .1 16,400
Y 10,000 6,000 .2 11,200
L 8,000 2,000 .3 8,600
Q 9,000 2,000 .1 9,200
W 5,000 1,000 .3 5,300
X 5,000 1,500 .3 5,450
Total $71,000 $74,850
Table 11.10 ET computation.
WBS Element BTE Corrective Time Likelihood ET
J 6 1 .2 6.2
M 4 1 .3 4.3
V 6 2 .1 6.2
Y 8 3 .2 8.6
L 2 1 .3 2.3
Q 8 1 .1 8.1
W 1 1 .3 1.3
X 1 1 .3 1.3
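As a check on the arithmetic, the per-package aggregation can be reproduced with a short Python sketch (package data copied from the BCE, corrective, and likelihood columns of Tables 11.9 and 11.10; the critical path is taken from Figure 11.6):

```python
# Work-package data: (BCE $, corrective cost $, BTE weeks,
#                     corrective time weeks, likelihood)
packages = {
    "J": (10_000, 2_000, 6, 1, 0.2),
    "M": ( 8_000, 1_000, 4, 1, 0.3),
    "V": (16_000, 4_000, 6, 2, 0.1),
    "Y": (10_000, 6_000, 8, 3, 0.2),
    "L": ( 8_000, 2_000, 2, 1, 0.3),
    "Q": ( 9_000, 2_000, 8, 1, 0.1),
    "W": ( 5_000, 1_000, 1, 1, 0.3),
    "X": ( 5_000, 1_500, 1, 1, 0.3),
}

# EC = BCE + corrective cost x likelihood, summed over all packages.
total_ec = sum(bce + cc * p for bce, cc, _, _, p in packages.values())

# ET = BTE + corrective time x likelihood, summed along the
# critical path J-M-V-Y-W-X from Figure 11.6.
critical_path = ["J", "M", "V", "Y", "W", "X"]
project_et = sum(packages[w][2] + packages[w][3] * packages[w][4]
                 for w in critical_path)

print(round(total_ec))       # 74850
print(round(project_et, 1))  # 27.9
```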
Figure 11.6
Project network, accounting for risk time.
Decision trees
32
A decision tree is a diagram wherein the tree “branches” represent different chance outcomes. It is used to assess which risk
response among alternatives yields the best expected consequence.
One application of decision trees is to weigh the cost of project failure against the benefit of project success. Assume a
project has a BCE of $200,000 and a failure likelihood of 0.25 and, if successful, will yield a net profit of $1,000,000.
The expected value concept can be used to compute the average value of the project. Assuming the project could be
repeated many times, it would lose $200,000 (the BCE) 25 percent of the time and generate $1,000,000 profit the other 75
percent. Thus, the expected outcome would be
Expected outcome = (–$200,000)(0.25) + ($1,000,000)(0.75)
= $700,000
This suggests that although there is potential to net $1,000,000, it is more reasonable to use $700,000 as the expected value. It also
suggests that all project costs plus actions taken to reduce or eliminate the failure risk should not exceed $700,000.
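The expected-outcome figure can be verified directly (a one-line weighted average of the two branch outcomes, in Python):

```python
# Weighted average of the two branch outcomes:
# lose the $200,000 BCE with probability 0.25,
# net $1,000,000 profit with probability 0.75.
p_fail = 0.25
expected = -200_000 * p_fail + 1_000_000 * (1 - p_fail)
print(round(expected))  # 700000
```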
Another application of decision trees is in deciding between alternative risk responses. Suppose a project has a BCE of $10
million, a failure likelihood of 0.6, and a risk impact of $5 million. Two strategies are being considered to reduce the risk
likelihood (but not the risk impact):
Strategy 1 will cost $2 million and will reduce the failure likelihood to 0.1.
Strategy 2 will cost $1 million and will reduce the failure likelihood to 0.4.
The decision tree and resultant expected project costs are shown in Figure 11.7. The analysis suggests Strategy 1 should be
adopted because it has the lowest expected cost.
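The branch arithmetic behind this comparison can be sketched as an expected-cost calculation (a minimal illustration; a do-nothing branch is included for reference, and all figures are in $ millions):

```python
BCE = 10.0     # baseline cost estimate, $ millions
IMPACT = 5.0   # extra cost if the failure risk materializes

def expected_cost(mitigation_cost, p_fail):
    # Each branch: baseline + mitigation spend + expected failure impact.
    return BCE + mitigation_cost + p_fail * IMPACT

options = {
    "Do nothing": expected_cost(0.0, 0.6),
    "Strategy 1": expected_cost(2.0, 0.1),
    "Strategy 2": expected_cost(1.0, 0.4),
}
best = min(options, key=options.get)
print(best, options[best])  # Strategy 1 12.5
```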
Another application of decision tree analysis is the expected commercial value method used in project selection, discussed
in Chapter 19.
Uncertainty and payoff tables
When there is no prior experience or historical data upon which to estimate the likelihood, the expected-value risk
consequence cannot be computed, and other criteria must be used to assess courses of action in the face of risk. This situation is
referred to as uncertainty, which implies that no information is available about what might occur. To determine the best strategy
under uncertainty, begin by identifying possible alternative paths the project could take in response to factors over which
management has no control. These different paths are called states of nature. Consider different possible strategies or actions,
and then indicate the likely outcome for each state of nature. The outcomes for different combinations of strategies and states
of nature are represented in a payoff table.
Figure 11.7
Decision tree.
For example, suppose the success of a project to develop Product X depends on market demand, which is known to be a
function of particular performance features of the product. The development effort can be directed in any of three possible
directions, referred to as strategies A, B, and C, each of which will provide the product with different performance features.
Assume also that a competing firm is developing a product that will have performance features similar to those under Strategy
A. When the product development effort ends, one of three future states of nature will exist: N1—no competing product enters
the market for at least 6 months; N2—the competing product enters the market within 6 months of Product X; N3—the
competing product is introduced before Product X. Suppose the likely profits in millions of dollars for the different
combinations of strategies and states of nature are computed as shown in Table 11.11.
The question: Which strategy should be adopted? The answer: It all depends! If project sponsors are optimistic, they will
choose the strategy that maximizes the potential payoff. The maximum potential payoff in the table is $90 million, which
happens for Strategy C and State of Nature N1. Thus, optimistic project sponsors will adopt Strategy C. In general, choosing the
strategy that has the potential to yield the largest payoff is called the maximax decision criterion.
Now, if project sponsors are pessimistic, they will instead be interested in minimizing their potential losses, in which case
they will adopt the strategy that gives the best outcome under the worst possible conditions. For the three strategies A, B, and
C, the worst-case payoffs are –$20 million, $50 million, and $40 million, respectively. The best (least bad) of the three is
$50 million, for Strategy B. In general, choosing the strategy that gives the best outcome out of the worst-case scenarios is called the
maximin decision criterion.
Any oice of strategy other than the best one will cause the decision-maker to experience an opportunity loss or regret.
is way of thinking suggests another criterion for oosing between strategies, the minimax decision criteria, whi is the
strategy that minimizes the regret of not having osen the best strategy. Regret for a given state of nature is the difference in
the outcomes between the best strategy and any other strategy. is is illustrated in a regret table, shown in Table 11.12. For
example, given the payoffs in Table 11.11, for State of Nature N1, the highest payoff is $90 million. Had Strategy C, the optimal
strategy, been selected, the regret would have been zero, but had strategies A or B been selected instead, the regrets would
have been $30 million ea (the difference between their outcomes, $60 million, and the optimum, $90 million). e regret
amounts for States of Nature N2 and N3 are determined in similar fashion.
Table 11.11 Payoff table.
States of Nature
Strategy N1 N2 N3
A 60 30 -20
B 60 50 60
C 90 70 40
Table 11.12 Regret table.
States of Nature
Strategy N1 N2 N3
A 30 40 80
B 30 20 0
C 0 0 20
To understand how to minimize regret, first look in the regret table at the largest regret for each strategy. The largest
regrets are $80 million, $30 million, and $20 million for strategies A, B, and C, respectively. Next, pick the smallest of these, $20
million, which occurs for Strategy C. Thus, Strategy C is the best choice in terms of minimizing regret.
Another strategy selection approach is to assume that every state of nature has the same likelihood of occurring. This is
called the maximum expected payoff decision criterion. Referring back to the payoff table, Table 11.11, assume the likelihood of
each state of nature is one-third; thus, the expected payoff for Strategy A, given outcomes from the payoff table, is
1/3(60) + 1/3(30) + 1/3(–20) = 23.33, or $23.33 million
The expected payoffs for strategies B and C, computed similarly, are $56.67 million and $66.67 million, respectively. Thus,
Strategy C would be chosen as giving the maximum expected payoff. Notice in the previous examples that three of the four
selection criteria point to Strategy C. This in itself might convince decision-makers that Strategy C is most appropriate.
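The four selection criteria applied to Table 11.11 can be computed mechanically (a small Python sketch of the reasoning above; payoffs in $ millions):

```python
# Payoff table (Table 11.11); columns are states of nature N1, N2, N3.
payoffs = {"A": [60, 30, -20], "B": [60, 50, 60], "C": [90, 70, 40]}

# Maximax: strategy with the best best-case payoff.
maximax = max(payoffs, key=lambda s: max(payoffs[s]))

# Maximin: strategy with the best worst-case payoff.
maximin = max(payoffs, key=lambda s: min(payoffs[s]))

# Minimax regret: regret = best payoff in the column minus the cell;
# pick the strategy whose largest regret is smallest.
col_best = [max(row[i] for row in payoffs.values()) for i in range(3)]
worst_regret = {s: max(col_best[i] - v for i, v in enumerate(row))
                for s, row in payoffs.items()}
minimax_regret = min(worst_regret, key=worst_regret.get)

# Maximum expected payoff with equally likely states of nature.
expected = {s: sum(row) / 3 for s, row in payoffs.items()}
max_expected = max(expected, key=expected.get)

print(maximax, maximin, minimax_regret, max_expected)  # C B C C
```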
Simulation
Application of simulation to project scheduling, illustrated in Chapter 8, gives the probability distribution of
outcomes, which can be used to determine the probability (or likelihood) of a particular outcome such as
completion cost or time. In turn, this can be used to establish an appropriate target budget or completion date or
to prepare contingency plans. For instance, although the critical path in Chapter 8, Example 8.2, indicated the
project would be completed in 147 days, the simulated completion-time distribution in Figure 8.14 indicated that it would be 155
days, on average. Thus, at the earliest, the target completion should be set at 155 days, although the likelihood of not meeting
that date would be 50 percent. Using the simulated probability distribution, a target completion date can be set such that the
likelihood of not meeting it is more acceptable. Alternatively, given a pre-specified project target completion date, simulation
can be used to estimate the likelihood of not meeting it and hence to decide whether to prepare contingency plans or change
the project requirements, activities, or network.
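A minimal Monte Carlo sketch of the idea follows; the three serial activities and their triangular duration estimates are hypothetical, not the data of Example 8.2:

```python
import random

# Serial chain of activities, each with (optimistic, most-likely,
# pessimistic) duration estimates in weeks -- hypothetical data.
activities = [(4, 6, 10), (3, 4, 8), (5, 6, 9)]

def one_run():
    # random.triangular(low, high, mode) draws one duration per activity.
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in activities)

random.seed(42)
runs = [one_run() for _ in range(10_000)]

mean_duration = sum(runs) / len(runs)
target = 20  # candidate target completion, weeks
p_miss = sum(d > target for d in runs) / len(runs)
print(f"mean duration ~ {mean_duration:.1f} weeks, "
      f"P(miss {target}-week target) ~ {p_miss:.2f}")
```

The fraction of runs exceeding a candidate target date estimates the likelihood of missing it; the target can then be shifted until that likelihood is acceptable.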
Review estions and Problems
1. Should risks that have low likelihood be ignored? Explain.
2. How does a person’s risk tolerance affect whether he rates a risk high, medium, or low?
3. What is meant by risk of failure?
4. What factors make a project high risk?
5. Discuss the difference between internal risk and external risk. List sources of risk in ea of these categories.
6. Describe ea of the following sources of tenical risk: maturity, complexity, quality, and concurrency or dependency.
7. Briefly describe the following risk identification techniques: analogy, checklists, WBS analysis, process flowcharts, and
brainstorming.
8. Describe a cause-and-effect diagram. Pick a problem (effect) of your own choice and use a cause-and-effect diagram to
illustrate it.
9. A project involves developing a system with state-of-the-art hardware and software, both complex, and where system
performance depends on another, external system that is being developed concurrently. Based on Table 11.3, and assuming
all risk factors are independent and equally weighted, what is the CLF for the project?
10. What is an influence diagram? How is it used to identify and analyze risk sources and to assign priorities to those sources?
11. Tables 11.3 and 11.4 are for illustration purposes. Discuss the general applicability of these tables to rating risks in projects.
Would you use these tables to assess the risk likelihood and impact in a project of your choice? Why or why not?
12. Are equations (11.1), (11.2), and (11.3) good ways for rating the overall likelihood, impact, and consequences of risk?
Discuss pros and cons of using these equations.
13. Discuss briefly ea of the following ways to handle risk: transfer risk, avoid risk, reduce risk, contingency plan, and accept
risk.
14. Think of a project you are familiar with and problems it encountered. List some ways the problems could have been
avoided, and explain each of them.
15. What is a design margin? How does its application reduce risk?
16. One requirement of a power-generating system states that it must provide 500 kWh (kilowatt-hours) minimum output. The
system has three power-generating subsystems, X, Y, and Z. Constraints on physical size indicate that the output capacity
of the overall system will be split among the three subsystems in the approximate ratio of 5:3:2. A 3 percent design margin
is applied to the system and the subsystems. Note, because the power requirement is stated as minimum output, the target
output will be 3 percent above the requirement.
a. What is the target requirement output for the overall system?
b. What are the target requirement outputs for each of the subsystems? (Remember, subsystem margins are in addition to
the system margin.)
c. Suppose that, at best, Subsystem X can be designed to meet only 47 percent of the power output requirement for the
overall system. Assuming that Subsystems Y and Z can be designed to meet their respective design targets, can the
output requirement for the overall system also be met?
17. List and review the principles of risk management.
18. How does risk planning serve to increase risk-taking behavior?
19. Risk management includes being prepared for the unexpected. Explain.
20. Can risk be eliminated from projects? Should management try to eliminate it?
21. How and where are risk time and risk cost considerations used in project planning?
22. Where would the maximax, maximin, and minimax regret criteria be used during the project life cycle to manage project
risk?
23. Figure 11.8 is the network for the Largesse Hydro Project:
Figure 11.8
Largesse Hydro Project.
The following table gives the baseline cost and time estimates (BCE and BTE), the cost and time estimates to correct for
failure, and the likelihood of failure for each work package.
WBS Element BCE BTE (wk) Corrective Cost Corrective Time (wk) Likelihood
L $20,000 9 $4,000 2 .2
V $16,000 8 $4,000 2 .3
T $32,000 5 $8,000 2 .1
U $20,000 7 $12,000 3 .2
S $16,000 3 $4,000 1 .3
J $18,000 3 $4,000 1 .1
R $10,000 4 $4,000 3 .3
C $15,000 6 $5,000 2 .3
a. Determine the risk time and risk cost for all the WBS elements of the project.
b. Consider the risk times on noncritical paths. Which activities and paths should be watched carefully as posing the
highest risks?
c. What is the project expected cost (EC) and expected time (ET)?
24. The geographical location of the Largesse Hydro Project threatens it with weather-associated delays and costs. The bad
weather likelihood is estimated at 0.30, with a potential impact of delaying work by 10 weeks and increasing the cost by
$20,000.
a. Ignoring the time and cost risks in Problem 23, what are the expected project completion time and completion cost
considering the weather risk?
b. What is the estimated expected project completion time and cost considering the weather risk and the risks listed in
Problem 23?
25. Soside Systems has a $100,000 fixed-price contract for installation of a new application system. e project is expected to
take 5 weeks and cost $50,000. Experience with similar projects suggests a 0.30 likelihood that the project will encounter
problems that could delay it by as mu as 3 weeks and increase the cost by $30,000. By increasing the project staff 20
percent for an additional cost of $10,000, the likelihood of problems would be reduced to 0.10 and the delay and cost to 1
week and $8,000, respectively. Set up a decision tree to show whether Soside should increase the size of the project staff.
26. Corecast Contractors has been requested by a municipality to submit a proposal bid for a parking garage contract. In the
past, the cost of preparing bids has been about 2 percent of the cost of the job. Corecast project manager Bradford Pitts is
considering three possible bids: cost plus 10 percent, cost plus 20 percent, and cost plus 30 percent. Of course, increasing the
“plus percent” increases the project price and decreases the likelihood of winning the job. Bradford estimates the likelihood
of winning the job as follows:
Bid price P(win) P(lose)
P1 C + 0.1C = 1.1C 0.6 0.4
P2 C + 0.2C = 1.2C 0.4 0.6
P3 C + 0.3C = 1.3C 0.2 0.8
In all cases, the profit (if the bid is won) will be the bid price minus the cost of the job and the proposal preparation cost,
0.02C; the loss (if the bid is not won) will be the proposal preparation cost. Prepare a decision tree for the three options. If
Bradford uses the maximum expected profit as the criterion, which bid proposal would he select?
27. Iron Buerfly, Inc., submits proposals in response to RFPs and faces three possible outcomes: N1, Iron Buerfly gets a full
contract; N2, it gets a partial contract (job is shared with other contractors); N3, it gets no contract. e company is
currently assessing three RFPs, coded P1, P2, and P3. For P3, the customer will pay a fixed amount for proposal preparation;
for P1 and P2, Iron Buerfly must absorb the proposal-preparation costs, whi are expected to be high. Based upon project
revenues and proposal-preparation costs, the expected profits ($ thousands) are as shown:
N1 N2 N3
P1 500 200 -300
P2 300 100 -100
P3 100 50 25
To whi RFPs would Iron Buerfly respond using the three decision criteria?
28. Frank Wesley, project manager for the LOGON project, is concerned about the development time for the robotic drone
transporter. Although the subcontractor, Creative Robotics, has promised a delivery time of 6 weeks, Frank knows that the
actual delivery time will be a function of the number of other projects Creative Robotics is working on. As incentive to
speed up delivery of the transporter, Frank has three options:
S1: Do nothing.
S2: Promise Creative Robotics a future contract with Iron Butterfly.
S3: reaten never to contract with Creative Robotics again.
He estimates the impact of these actions on delivery time would be as follows:
Payoffs: Creative Robotics Workload
Strategy Slow Average Busy
S1 4 6 8
S2 3 4 7
S3 3 6 6
What strategy should Frank adopt based upon uncertainty criteria? Use criteria similar to the maximax, maximin, minimax,
and maximum expected payoff, except note that the criteria must be adapted because here the goal is to minimize the
payoff (time); this is in contrast to the usual case, which is to maximize the payoff.
Questions About the Study Project
1. What did managers and stakeholders believe were the major risks in the project?
2. In your own judgment, was this a risky project? Why or why not?
3. Was formal risk analysis performed? When was it done (in initiation, feasibility, etc.)?
4. Was a formal risk management plan created? Discuss the plan.
5. Was there a risk officer? Discuss her duties and role in the project.
6. How were risks identified?
7. How were risks dealt with (through risk transfer, acceptance, avoidance, reduction, etc.)?
8. Was a risk register used? Was it ever updated? If so, how often?
9. Discuss the use of contingency plans and budget and schedule reserves to cover risks.
10. What risks materialized during the project, and how were they handled?
CASE 11.1 THE SYDNEY OPERA HOUSE
33
The Sydney Opera House (SOH) is a top tourist attraction and landmark for Sydney and all of Australia. It is a major arts
center, although owing to its design, it is not necessarily the best place to hear opera. The SOH is visually spectacular and a
magnificent structure (Figure 11.9), but it was a nightmare to design and build.
The original concept for the SOH was a sketch submitted by Danish architect Jorn Utzon. Judges selected it from an open
competition that ended with 233 entries from 11 countries. Though happy to win, Utzon was mildly shocked. The concept
that had caught the judges’ attention consisted only of simple sketches, with no plans or even perspective drawings. Utzon
faced the challenge of converting the sketches into a design from which a structure could be built, but he had no prior
experience in designing and constructing such a large building. Because there were no plans, detailed drawings, or estimates
of needed materials, there was little on which to base cost estimates. No one knew how it would be built; some experts
questioned whether it could be built at all. Interestingly, because the design was so unique, some people thought it would also be
inexpensive to build. The initial cost was estimated at $7 million, to be paid by the government through profits from a series
of state-run lotteries.
Figure 11.9
Sydney Opera House.
Source: iSto.
Engineers reviewing the concept noted that the roof shells were much larger and wider than any shells ever built.
Further, because they stuck up so high, they would act like sails in the strong winds blowing up the harbor. Thus, they would
have to be carefully designed and constructed to prevent the building from blowing away!
Government managers worried that people scrutinizing the design might raise questions about potential problems and
stall the project. They thus quickly moved ahead and divided the work into three main contracts: the foundation and building
except the roof, the roof, and the interior and equipment.
As experts had warned, the SOH project became an engineering and financial debacle, lasting 15 years and costing $107
million ($100 million over the initial estimate). Hindsight is 20/20, yet from the beginning this should have been viewed as a
risky project. Nonetheless, risks were downplayed or ignored, and little was done to mitigate or control them.
QUESTIONS
1. Identify the obvious risks.
2. What early actions should have been taken to reduce the risks?
3. Discuss some principles of risk management that were ignored.
CASE 11.2 INFINITY & BEYOND
Infinity & Beyond, Inc., produces high-tech fashion merchandise. The company’s marketing department has identified a new
product “concept” through discussions with three customer focus groups. The department is excited about the new concept
and presents it to top management, who approves it for further study. Lisa Denney, senior director of new product
development, is asked to create a plan and cost breakdown for the development, manufacture, and distribution of the
product. Despite the enthusiasm of the marketing department, Lisa is unsure about the product’s market potential and the
company’s ability to develop it at a reasonable cost. To her way of thinking, the market seems ill defined, the product goals
unclear, and the product and its production technology uncertain. Lisa asks her chief designer to create some product
requirements and a rough design that would meet the requirements and to propose how the product might be manufactured.
Aer a few weeks, the designer reports ba with requirements that seem to satisfy the marketing concept. She tells Lisa
that because of the newness of the tenology and the complexity of the product design, the company does not have the
experience to develop or even manufacture the product on its own. Lisa es out several design/development firms, asking
one, Margo-Spinner Works Company, MSW, to review the product concept. MSW assures Lisa that although the tenology
is new to them, it is well within their capability. Lisa reports this to top management, who tells her to go ahead with the
development project.
Lisa sets a fixed-price contract with MSW and gives them primary responsibility for the development effort. MSW
management had argued for a cost-plus contract, but when Lisa stipulated that the agreement had to be fixed-price, MSW
said okay, only under the condition that it be given complete control of the development work. Lisa feels uncomfortable with
the proposition, but knows of no other design company qualified to do the work, so she agrees.
QUESTIONS
1. Discuss the major sources of risk in this project.
2. What do you think about Lisa’s handling of the project so far? Would you have done anything differently?
3. Discuss what Lisa and other parties did that served to increase or decrease the risks.
CASE 11.3 THE NELSON MANDELA BRIDGE
34
Newtown, South Africa, is a suburb of Johannesburg that boasts a rich cultural heritage. As part of an attempt to help
rejuvenate Newtown, the Nelson Mandela Bridge was constructed to link it to important roads and centers of commerce in
Johannesburg. Spanning 42 electrified railway lines, the bridge (Figure 11.10) has been acclaimed for its functionality and
beauty.
La of space for the support pylons (towers) between the railway lines dictated that the bridge design would have a long
span. is resulted in a structure with the bridge de supported by stay cables from pylons of unequal height. e pylons on
the northern side are 48 meters high, and those on the southern side are 35 meters high.
The pylons are composite columns consisting of steel tubes that had to be filled with concrete after being hoisted into the
vertical position. The decision was made to pump the concrete into the tubes through a port at the bottom of each tube. This
had to be done in a single operation. Although the technology for casting concrete this way was not new, the columns were
the highest in South Africa, and filling them would set a world record for bottom-up pumping of self-curing concrete.
Figure 11.10
Nelson Mandela Bridge, Johannesburg. Source: iStock.
The pump for the concrete was placed at ground level between the electrified railway lines, which exposed workers to
the risks of being near continuous rail operations. The pumping method posed the risk of the stone aggregate and cement in
the concrete mixture segregating in the pylon tubes before the concrete solidified, which would compromise the strength of
the concrete. Another risk was that the pump might fail and result in the concrete solidifying in an uncompleted pylon,
rendering further pumping of concrete from the bottom impossible. Two contingencies were considered: an additional pump
on standby and completing the process by pouring concrete from the top of the pylon.
The concrete had to be transported by trucks to the site, which risked interrupting the concrete supply owing to traffic
congestion in the city.
Despite working over a busy yard with trains running back and forth, no serious accident occurred at any time in the
420,000-labor-hour project. The pump never failed, and construction finished on time. The stay cables—totaling 81,000
meters in length—were installed and the bridge deck lifted off temporary supports, all while the electrified railway lines
beneath remained live. Upon completion of the bridge, some felt that the costs incurred to reduce the risks had been
excessive; others held that the risks were too high and not enough had been done to reduce them.
QUESTIONS
1. How would you have identified the risks? (Refer also to methods in Chapter 10.)
2. Using the following table, discuss how the risks were addressed or how they could have been addressed. Include any
additional risks you can think of.
Possible Risk Event | Plans to Address Risk (Accept / Avoid / Reduce / Transfer) | Contingency (Plans and/or Reserves)
Failure to make an acceptable profit
Not finishing the construction by Nelson Mandela’s 85th birthday
Interference with rail activities
Geological structures necessitating expensive foundations
The concrete mixture segregating when pumped into the columns
A pump failure while concrete is being pumped
Interrupted supply of concrete due to trucks transporting concrete delayed in traffic
3. State whether the risks listed in the table above are internal or external.
4. Describe how you would determine the expected values of the risks listed in the table.
5. Compile a complete list of information that you would require in order to make an assessment of the risk of a pump
failure.
6. What information do you think would have been available early in the project, and from where would you obtain it?
7. Draw a cause-and-effect (CE) diagram showing different factors that could contribute to delaying the project.
8. Describe how risks are reduced over the lifespan of a project su as this one.
9. With reference to the concerns expressed upon completion of the construction, discuss the statement: “Risks always
relate to the future. There is no such thing as a past risk.”
10. Discuss the difference between good decisions and good luck.
11. How could a manager protect himself against the risk of making a decision that might later have negative implications?
CASE 11.4 SUNRISE BEACH DEVELOPMENT
A developer purased a plot of land located in a prime real-estate district near the downtown of a large Florida city. e
developer thinks the site, currently occupied by a long-closed factory, is ideal for two 10-to 20-story buildings to contain
hotels, retailers, and condominiums. Located by the Atlantic Ocean, the site, to be called Sunrise Bea, enjoys a large stret
of bea that, the developer speculates, would draw numerous tourists and affluent professionals who work in the nearby
city. A quarter of the land is marshy area occupied mostly by wildlife, and the developer is considering filling this in and
constructing a third high-rise building. To lure tourists, the developer is also considering mounting a small roller-coaster atop
one of the buildings, similar to an araction in Las Vegas. Automobile parking would be provided under the buildings to
maximize street-level retail space.
The site is located on a low-lying sliver of land separated from the mainland by a river, which is spanned by a small two-lane bridge, built 70 years earlier to accommodate factory workers. The developer plans to widen the bridge structure to
handle increased traffic.
In the last 5 years, the region was hit by three hurricanes, and the city suffered considerable storm-surge flooding. Lately,
some streets flood at high tide, even on sunny days.
QUESTIONS
Consider the following threats posed by climate change: rise in sea level, frequency and ferocity of storms, average
atmospheric temperatures and urban heat islands, droughts, flooding, and coastal erosion. Given characteristics of the project
itself (building size and use, parking, beach, roller coaster, bridge), to which threats is the project most sensitive? Next,
consider aspects of the project location (coastal, low-lying, southeastern United States). To which threats is the project most
exposed? Consider both current and future climates, and assume threats will worsen. Use tables similar to Tables 11.6, 11.7,
and 11.8 to rate the project’s sensitivity, exposure, and vulnerability to the threats.
Based on this assessment, would you recommend going forward with the project as currently planned? If not, what
changes would you suggest to increase the project’s resilience to the threats posed by climate change?
Notes
1. Quoted in Bernstein P. Against the Gods: The Remarkable Story of Risk. New York: John Wiley & Sons; 1996, p. 331.
2. Asked once to define certainty, John Von Neumann, the principal theorist of mathematical models of uncertainty, answered with an
example: to design a house so it is certain the living room floor never gives way, “calculate the weight of a grand piano with six men
huddling over it to sing, triple the weight,” then design the floor to hold it. That will guarantee certainty! Source: Bernstein. Against the
Gods, p. 233.
3. See Argus R. and Gunderson N. Planning, Performing, and Controlling Projects. Upper Saddle River, NJ: Prentice Hall; 1997, pp. 22-23.
4. Adapted from Miaels J. Technical Risk Management. Upper Saddle River, NJ: Prentice Hall; 1996, pp. 208-250.
5. Turoff M. and Linstone H. (eds). The Delphi Method: Techniques and Applications; 2002, http://is.njit.edu/pubs/delphibook/
6. The term “likelihood” is sometimes distinguished from “probability.” The latter refers to values based on frequency measures from historical
data, the former to subjective estimates or gut feel. If two of three previous attempts met with success the first time, then ceteris paribus, the
probability of success on the next try is 2/3 or 0.67. Even without numerical data, however, a person with experience can, upon reflection,
come up with a similar estimate that “odds are two to one that it will succeed the first time.” Although one estimate is objective and the
other subjective, that does not imply one is better than the other. Objective frequency data will not necessarily give a reliable estimate
because a multitude of factors can influence outcomes; a subjective estimate, in contrast, might be reliable because humans often can do a
pretty good job of assimilating lots of factors.
7. Roetzheim W. Structured Computer Project Management. Upper Saddle River, NJ: Prentice Hall; 1988, pp. 23-26; further examples of risk
factors and methods of likelihood quantification are given in Michaels. Technical Risk Management.
8. See Dingle J. Project Management: Orientation for Decision Makers. London: Arnold; 1997.
9. See Gilbreath R. Winning at Project Management: What Works, What Fails, and Why. New York: John Wiley & Sons; 1986.
10. Roetzheim. Structured Computer Project Management, pp. 23-26.
11. Pool R. Beyond Engineering: How Society Shapes Technology. New York: Oxford University Press; 1997, pp. 197-202.
12. Kotulak R. Key differences seen in Columbia, Challenger disasters. Chicago Tribune; February 2, 2003, Section 1, p. 5.
13. Pool. Beyond Engineering, pp. 207-214.
14. Miaels. Technical Risk Management, p. 40.
15. Statistics make it easy to depersonalize the consequences. For example, it is less distressing to state that there is a 0.005 likelihood of
someone being killed than to say that 5 people out of 1,000 will be killed.
16. Mitroff I. and Linstone H. The Unbounded Mind. New York: Oxford; 1993, pp. 111-135.
17. Ibid.
18. European Commission. Planning for Adaptation to Climate Change: Guidelines for Municipalities. Life Project No LIFE08 ENV/IT/000436.
N.d., circa 2013, https://base-adaptation.eu/sites/default/files/306-guidelinesversionefinale20.pdf, accessed May 1, 2019.
19. Tables 11.6-11.8 adapted from European Commission on Climate Action. Climate Change and Major Projects. European Union
Publications Office; 2016, p. 7, https://ec.europa.eu/clima/sites/clima/files/docs/major_projects_en.pdf, accessed March 30, 2019.
20. Eisner H. Computer-Aided Systems Engineering. Upper Saddle River, NJ: Prentice Hall; 1988, p. 335.
21. See Grady J. System Requirements Analysis. New York: McGraw-Hill; 1993, pp. 106-111.
22. Eisner. Computer-Aided Systems Engineering, p. 336.
23. Ibid.
24. A breadboard is a working assembly of components. A prototype is an early working model of a complete system. Both are used to
demonstrate, validate, or prove feasibility of a design concept. Breadboards, prototypes, and modeling are discussed in Chapters 2 and 10.
25. Wakabayashi H. and Cowan B. Vancouver International Airport expansion. PM Network; September 1998: 39-44.
26. Whien N. Meet minimum requirements: Anything more is too mu. PM Network; September 1998: 19.
27. DeMarco T. The Deadline. New York: Dorset House; 1997, p. 83; Yourdon E. Rise and Resurrection of the American Programmer. Upper
Saddle River, NJ: Prentice Hall; 1998, pp. 133-136.
28. Dorner D. The Logic of Failure. Reading, MA: Addison-Wesley; 1997, p. 163.
29. Aronstein D. and Piccirillo A. Have Blue and the F117A: Evolution of the Stealth Fighter. Reston, VA: American Institute of Aeronautics
and Astronautics; 1997, pp. 79-80.
30. Ibid., pp. 186-190.
31. For other approaes to risk time analysis, see Miaels. Technical Risk Management.
32. This section and the next address the more general topic of decision analysis, a broad topic that receives only cursory coverage here. A
classic book on the subject is Luce R.D. and Raiffa H. Games and Decisions. New York: John Wiley & Sons; 1957.
33. Adapted from Kharbanda O. and Pinto J. What Made Gertie Gallop: Learning from Project Failures. New York: Van Nostrand Reinhold;
1996, pp. 177-191.
34. Source: Kromhout F., Divisional Director, Bridges, BKS (Pty) Ltd, Pretoria.
