
Business as usual, as defined in our day-to-day practice, is an unchanging state of affairs. That is, a state that persists despite the occurrence of non-divergent hazards of any kind (man-made or natural).

Business as usual definition in Risk Assessment

An example of business as usual and a non-divergent hazard

For instance, the variability of any parameter, as considered and specified in the design of a system, is “business as usual”. Therefore, that variability does not represent a hazard.

For example, an oil-price variation of ±10% in a project could be considered “business as usual” if so specified. However, +30% could be a non-divergent hazard, as it represents a “usual extreme”.

“Usual extremes” are those that have been observed with a frequency similar to or higher than the inverse of the expected duration of the system’s life, or of the time horizon selected for a study. At a mine with an expected life of twenty years, any extreme phenomenon exhibiting a long-term average frequency of 1/20 or higher is a “usual extreme”, and thus a non-divergent hazard. If the selected time horizon for a project is, say, five years, and we look at oil prices, any price variation with a frequency of 1/5 or higher is simply a usual extreme.
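
To make the threshold concrete, here is a minimal Python sketch (illustrative only; the function name and figures are our assumptions, not part of any Riskope tool) that flags whether an observed long-term frequency qualifies as a “usual extreme” for a given time horizon.

```python
# A hazard counts as a "usual extreme" (non-divergent) when its long-term average
# frequency is at least 1 / (study time horizon or expected system life).
def is_usual_extreme(annual_frequency: float, time_horizon_years: float) -> bool:
    return annual_frequency >= 1.0 / time_horizon_years

print(is_usual_extreme(1 / 20, 20))   # True: a 1/20 phenomenon over a 20-year mine life
print(is_usual_extreme(1 / 100, 5))   # False: a 1/100 event lies beyond a 5-year horizon's "usual extremes"
```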

Of course, the hazard’s magnitude and frequency, and its consequences, are always subject to uncertainties.

Divergent hazards in risk assessment

As defined elsewhere, divergent hazards depart from long-term averages and “usual extremes”, both in terms of frequencies and intensities.

For example, a 1/100 rain event repeating itself three times in a short period, say 5 years, is a divergent hazard. In a first approximation, the estimated likelihood goes from:

  • 1/100 = 1% at the first occurrence, to
  • 2/101 ≈ 2% at the second, and finally
  • 3/105 ≈ 3% at the third.

Even if the consequences remain constant over the period, the risk is multiplied by 3. However, if repairs are not immediate and the system is therefore weaker at the second and third event, consequences may also increase. Thus the risk could follow an exponential increase. Note that other parameters may also change, among them land use, population and the environment. Of course, one needs to consider all of these changes in the evolution of the risk.
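
A minimal sketch of this first-approximation frequency update follows (Python, illustrative only; the helper function is ours and not a published Riskope formula).

```python
# Crude frequentist update: extend the observation record each time the event recurs.
def updated_frequency(prior_events: int, prior_years: float,
                      new_events: int, extra_years: float) -> float:
    return (prior_events + new_events) / (prior_years + extra_years)

f1 = 1 / 100                          # 1 event in 100 years  -> 1%
f2 = updated_frequency(1, 100, 1, 1)  # 2 events in 101 years -> ~2%
f3 = updated_frequency(2, 101, 1, 4)  # 3 events in 105 years -> ~2.9%
print(round(f1, 4), round(f2, 4), round(f3, 4))

# With constant consequences C, risk ~ frequency * C roughly triples; if the system
# is left un-repaired, C itself may grow and compound the increase.
```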

Divergent hazards effects

Divergent hazards generate risks with low predictability and foreseeability.

Let’s consider another example. One can consider a refinery explosion, even one of significant proportions, as business as usual. That is because we know these events do occur with a certain frequency. Their damages are foreseeable with a certain degree of uncertainty, especially if one gives due regard to interdependencies.

That refinery explosion could become a divergent event if:

  • it were unpredictable, because it unexpectedly overcame all lines of defense, and
  • perhaps also unforeseeable, due to ripple effects on interdependent consequences.

Suppose that divergent event had a probability below the threshold of credibility, or perhaps hit several operations simultaneously. Then we would enter the realm of the Act of God and the Black Swan.

The discussion of terms such as foreseeability and predictability in risk assessments is rather common. As usual, at Riskope we like to have an extremely clear glossary, in order to avoid blunders due to miscommunication.

Foreseeability and predictability in risk assessments

Foreseeability is the ability to perceive, know in advance, or reasonably anticipate that damage or injury will probably ensue from acts or omissions. A foreseeable event or situation is one that can be known about or guessed before it happens. We apply it to consequences: can we foresee the damage generated by a (predicted) hazard hit, based on present or future mitigation and policies/actions?

Predictability is the state of knowing what something is like, when something will happen, etc. We apply it to hazards: can we predict the magnitude and the frequency of a hazard?

A scenario that has low predictability and low foreseeability can easily land in the blind spot quadrant or in the unknown one. Indeed, one can decide to declare that scenario unknown while “the public sees it”. In that case one is creating a blind spot. Alternatively, if one considers that the public also does not know, then the scenario lands in the unknown quadrant.

Of course, the role of a good risk approach is to reduce uncertainties. By doing so, the blind spot and unknown quadrants shrink to the advantage of the arena and façade ones.
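
As a hedged illustration of how a scenario lands in one of the four quadrants mentioned above, here is a tiny sketch; the function and its inputs are ours, for discussion only.

```python
# Classify a scenario by who perceives it: the assessor ("we") and the public.
def quadrant(we_see_it: bool, public_sees_it: bool) -> str:
    if we_see_it and public_sees_it:
        return "arena"        # openly recognized and discussed
    if we_see_it:
        return "façade"       # we know, stakeholders do not
    if public_sees_it:
        return "blind spot"   # declared "unknown" while the public sees it
    return "unknown"          # nobody sees it (yet)

print(quadrant(we_see_it=False, public_sees_it=True))   # 'blind spot'
```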

Usual and usual extreme hazards

“Usual” hazards, including their “usual extremes”, are predictable and foreseeable. They have a relatively narrow margin of uncertainty, provided interdependencies and systemic amplification are properly considered.

Divergent hazards are unpredictable insofar as their frequency (at a given magnitude level) “explodes”, e.g. three 100-year rain events in one year. Their foreseeability may also decrease for various internal or external reasons, because “wounded systems” are less robust and consequences may be amplified. Think about massive fires leading to erosion, slope failures, etc., in case of rain.

In a blogpost we will soon publish, we will of course discuss “business as usual” from a risk assessment, tactical and strategic planning point of view.

Closing remarks on foreseeability and predictability in risk assessments

In this day and age it is paramount to clarify our technical glossary. The discussion of terms such as foreseeability and predictability in risk assessments is very important to ensure transparent and unequivocal communication with all stakeholders in projects and operations.

Warmest wishes of the season from Riskope to you! 

We define an Act of God in probabilistic risk assessment as an event with a probability of occurrence below the general consensus for credibility. In other words, it is an unbelievable event, supposedly the unfathomable “will of God”.

Act of God in probabilistic risk assessment

We can quantify probabilities down to certain frequency levels. As a matter of fact, in our day-to-day practice we consider event probabilities as follows (the short sketch after this list illustrates these bands):

  • down to 10^-5 as credible,
  • between 10^-5 and 10^-6 as poorly credible, and finally,
  • below 10^-6 as incredible. This is the start of the Act of God realm.
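
The sketch below simply encodes these bands (thresholds as quoted above; the function itself is only an illustration, not a Riskope deliverable).

```python
# Day-to-day credibility bands for annual probabilities of occurrence.
def credibility(annual_probability: float) -> str:
    if annual_probability >= 1e-5:
        return "credible"
    if annual_probability >= 1e-6:
        return "poorly credible"
    return "incredible (Act of God realm)"

for p in (1e-4, 5e-6, 1e-7):
    print(p, "->", credibility(p))
```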

Why do we state that? There are several reasons for this. We will discuss them below.

Cosmological extreme value of probabilities

First of all, below the values above we cannot evaluate meaningful uncertainty bands for single events. Indeed, predictions of most events requiring human error frequencies of the order of 10^-6 or less are clearly incredible. That is simply because the historical data set is generally non-existent. Let’s use some “extreme” thinking to put things in perspective!

The lower limit

The Big Bang, the creation of our universe, occurred about 10^10 years ago. That means the “history of our universe” is about fourteen billion years old. As a result, the creation occurred once in a rounded-up 10^10-year period. That means a “frequentist” person could think that its “occurrence rate” is 1/10^10 per year. Thus, any frequency smaller than 10^-10 per year really means that the event is less likely than the creation of our universe! This obviously is our “cosmological” extreme lower-bound limit of credibility in risk assessments.

However, as risk assessors we have to be more humble than that!

For instance, in the Proceedings of the CSNI Workshop on PSA Applications and Limitations, Santa Fe, New Mexico, 1990 (page 243), researchers stated that it is possible to quantify probabilities with reasonable “certainty” down to annual probability levels of 10^-6 to 10^-7. Indeed, below such values no one can give meaningful uncertainty bands for single events. Therefore, those sources recommend a cut-off frequency of 10^-7 for single events. Now we are getting near to what we use!

Credible accidents in the process industry

The process industry and other industries where major accidents/events are a concern define credible accidents. They are those:

  • “which are within the realm of possibility (i.e., probability higher than 10^-6/yr) and
  • have a propensity to cause significant damage (at least one fatality)”.

Interestingly, seismic, geological and other geo-sciences oftentimes use the threshold value of 10^-5 to define “maximum credible events”. Now we are spot-on!

What to do if your model or simulation delivers very low probabilities

Let’s suppose you use a mechanical model, for instance finite elements, to evaluate probabilities, and at the end of your evaluation you derive probabilities of failure below, say, 10^-5. Please stop right there! Go back to your uncertainties and review everything. Even if everything seems perfect, do not report anything lower than 10^-5: most likely your model is too perfect to be true.
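
As an illustration of that advice (this is not an ORE2_Tailings feature), a simple guard could clamp reported values at 10^-5 and flag the evaluation for an uncertainty review.

```python
# Clamp a modeled probability of failure at an assumed reporting floor of 1e-5
# and flag it for review of the underlying uncertainties.
def reportable_pof(model_pof: float, floor: float = 1e-5):
    """Return (value to report, needs_review flag)."""
    if model_pof < floor:
        return floor, True    # "too perfect to be true": revisit the uncertainties
    return model_pof, False

print(reportable_pof(3e-7))   # (1e-05, True)
print(reportable_pof(2e-4))   # (0.0002, False)
```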

Remember that, for example, top-class hydro-dams have proven to have a probability of failure around 10^-5 to 10^-6. In addition, class 5+ nuclear accidents do occur at around 10^-4 per reactor-year.

What to do if a consultant or a peer delivers a risk assessment with very low probabilities

There are numerous examples of preposterous statements conveyed verbally or in writing by “experts” who do not understand the limits discussed above.

This happens for modeled/simulated evaluations as well as semi-empirical ones. Thus, if you use those values, your position will not be a good one, especially if you end up in court in the aftermath of an accident.

If a consultant comes up with such a value, the best advice is to fire him or her before they get you into trouble.

Act of God in probabilistic risk assessment closing remarks

The insurance industry invented the “Act of God” term.

In its original legal usage, an Act of God was a calamitous event outside human control for which no person could be held responsible. As far as we know, there was no relationship between an Act of God and its probability of occurrence. It was in the hands of lawyers to argue.

As you can see, the definition in risk assessment is slightly different. In addition, note that oftentimes an Act of God may be the key to invoking a clause of Force Majeure in contracts, whether natural or man-made root causes are present.

We are proud to publish Holistic Geoethical Slopes’ Portfolio Risk Assessment in Geological Society, London, Special Publications, 508.

We want to personally thank Giuseppe Di Capua of IAPG for inviting us.


Here is a summary of what we discuss in our paper

Landslides of natural and man-made slopes, including dykes and dams, represent hazardous geomorphological processes that generate highly variable risks.

To optimize a slope mitigation approach, one has to combine the probability of failure and the cost of consequences, which is by definition the risk that each slope generates.

Any efficient risk management effort for a portfolio of slopes should be an “enterprise risk management” tactical and strategic plan, and not only a probability-of-failure reduction project. Indeed, the risks generated by a slope portfolio should also match public acceptance criteria. As a result, risk mitigation, from a single slope to portfolios of slopes, has to make sense from both the economic and the safety point of view. Independent risk assessors will ensure a drastic reduction of conflicts of interest.
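
As a minimal sketch of the underlying arithmetic (slope names and figures are invented for the example), the risk of each slope is its probability of failure times the cost of its consequences, and the portfolio risk results from bottom-up aggregation in the register.

```python
# Illustrative portfolio: annual probability of failure and consequence cost per slope.
portfolio = {
    "slope A": {"pof": 1e-3, "consequence_cost": 5.0e6},
    "slope B": {"pof": 5e-4, "consequence_cost": 2.0e7},
    "slope C": {"pof": 2e-2, "consequence_cost": 3.0e5},
}

risks = {name: s["pof"] * s["consequence_cost"] for name, s in portfolio.items()}
portfolio_risk = sum(risks.values())   # bottom-up aggregation

for name, r in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {r:,.0f} per year")
print(f"portfolio: {portfolio_risk:,.0f} per year")
```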

Holistic Geoethical Slopes’ Portfolio Risk Assessment in Geological Society

The main question addressed in the paper is: what does rational risk management mean for a single slope or a portfolio of natural or man-made slopes? Furthermore, the paper aims to show how sensible and efficient portfolio risk management can be achieved through Holistic Geoethical approaches which evaluate quantitative risks.

A rational quantitative risk assessment effort should deliver a convergent, scalable, updatable and drillable risk register. The register should transparently include uncertainties while using a clearly defined risk-technical glossary.

The risk register of a rational risk assessment should allow:

  • bottom-up aggregation of risks
  • delivery of the risk information required for decision-making support and finally
  • insurance, risk transfer and other strategic and tactical planning.

The need to simultaneously suppress the silo culture and cognitive biases, within a sustainable risk management and ethical environment, should be discussed. This becomes very important at regional or national scale, to avoid the usual “discipline silo trap” leading to the squandering of mitigative funds. Results coming from traditional and modern data gathering techniques should be integrated to achieve a sustainable portfolio.

Our ability to live sustainably is linked to the capacity to evaluate voluntary and involuntary risks, establish reasonable risk tolerances, and thus prioritize risks and their mitigations in the best possible manner.

Case study

We showcase the Cassas Landslide. The landslide is located in the NW Italian Alps. It impinges on a corridor encompassing main transportation lines, hydro-electrical facilities and a large village.

In order to ethically decrease the risks with limited resources, in a context of climate change and large population and land-use changes, it is paramount to be able to compare, transparently and in a repeatable manner, the risks generated by different landslides and mitigative alternatives.

As a result, several alternative stabilization techniques were studied, taking into account their:

  • life expectation,
  • maintenance criteria,
  • environmental impact, costs, and finally
  • residual risks.

Deploying a quantitative, screening-level risk assessment over a slope portfolio allows for enhanced and “early” decision making, that is, screening-level prioritization of slopes and mitigations before engaging in more detailed analyses. Common-practice risk matrix results are generally non-discriminant and overwhelm management.

Based on the client’s tolerance threshold, the analyst performs a ranking based on the intolerable part of the risks. This highlights critical areas of the slopes and guides recommendations on possible mitigations. The set of communication documents allowing all stakeholders to be properly informed of the outcome of the risk assessment is displayed as a dashboard.
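
A hedged sketch of that ranking logic follows (tolerance threshold, slope names and risk values are examples only).

```python
# Rank slopes by the intolerable part of their risk, i.e. the portion above the
# client's tolerance threshold (assumed figures, annual monetary units).
TOLERANCE = 5_000.0

risks = {"slope A": 5_000.0, "slope B": 10_000.0, "slope C": 6_500.0}

intolerable = {name: max(0.0, r - TOLERANCE) for name, r in risks.items()}
ranking = sorted(intolerable.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)   # slopes with the largest intolerable portion come first
```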

Risk-informed display of the risks through the ORE dashboard allows one to immediately understand the multidimensional effects of potential failures and their mitigation.

Conclusion

Understanding the intolerable risk of each slope in a portfolio leads to the rational and unambiguous prioritization of mitigations.

Risk mitigation alone and incomplete risk assessments do not help evaluate or control man-made or natural hazards. Indeed, they do not help in managing complex single phenomena or portfolios of slopes.

Risk assessment and risk triaging help guide rational and sustainable emergency procedures, preparedness and evacuation plans.

Deploying a Holistic Geoethical Slopes’ Portfolio risk mitigation brings the following benefits:

  • confidence,
  • clear decision-making and finally
  • powerful leadership.

Developing Holistic Geoethical Slopes’ Portfolio risk mitigation programs is possible. It is desirable and helps healthy debate of difficult public issues.

Using ORE2_Tailings we can quantify the impact of standard of care on dams survivability.

In this blogpost we take three dams, namely Dam x, Dam y and Dam z. Their design was identical, with an initial factor of safety of 1.3. In addition, they had similar QA/QC and construction methods, and the same systemic approach, efforts and consideration of uncertainties.

The impact of standard of care on dams survivability

Various small mishaps hit the dams over their history. Some repairs occurred, under different contracts, with different quality control and, finally, at different times. It turns out Dam x and Dam y were in rather poor shape at the time of our analysis, whereas Dam z had had an “easier life”.

ORE2_Tailings probabilities of failure of the three dams

Due to those differences, when we performed the ORE2_Tailings assessment the dams came out with different probabilities of failure, despite the identical initial factor of safety.

Dam     FoS     Annual probability of failure
Dam x   1.3     5.90E-03
Dam y   1.3     1.70E-03
Dam z   1.3     2.14E-04

The fact that identical factors of safety may lead to over one order of magnitude of difference in terms of probability of failure is not a surprise. Indeed, we showed that in our 2020 book Tailings Dam Management for the Twenty-First Century.

The client was impressed by these differences and asked us what the effect would be of releasing the operations (inspections, audits, etc.) and monitoring standard of care on these dams.

Effect of releasing the standard of care on the three dams

Thus, we used the results above as the base case and built two scenarios: a 25% and then a 50% reduction of the standard of care for operations (inspections, audits, etc.) and monitoring.

The figure below displays the results.

The impact of standard of care on dams survivability

As can be seen, and as expected, the reduction of standard of care increases the probability of failure of each dam. As a matter of fact, the larger the release, the higher the probability.

The impact of standard of care changes on dams survivability becomes evident. In addition, if the client wanted, we could even back-calculate the corresponding reduction of the factor of safety. Thus, it is possible to design a level of care that keeps a dam’s probability of failure within a certain range.

Standard of care reduction vs. benchmarking

It is then very interesting to see how the three dams behave with respect to the last-century world-wide performance benchmark which is part of ORE2_Tailings.

As the standard of care goes from “as is” to the 50% reduction (a schematic classification sketch follows this list):

  • The best of the dams, Dam z, remains in an area which is better than world dam portfolio performance.
  • With a 25% reduction, Dam y, which was also in the “better than world-wide” area, becomes an “average dam” with respect to last-century experience. In addition,
  • at a 50% reduction of care, Dam y reaches the upper bound of the “average dams” range.
  • Dam x starts in the top half of the average world-wide range. As the standard of care release reaches 25–50%, the dam moves into the “worse than the world portfolio” area. Because of that shift, the owner’s position may become difficult to defend within the frame of the Global Industry Standard on Tailings Management (Global Industry Standard on Tailings Management – Global Tailings Review).
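
The schematic sketch below illustrates this type of classification. The benchmark band limits are invented placeholders, chosen only so that the base-case results match the narrative above; the actual world-wide benchmark is part of ORE2_Tailings and is not reproduced here.

```python
# Assumed placeholder band limits for the world-portfolio benchmark (annual PoF).
BETTER_BOUND = 2e-3   # below this: "better than world portfolio" (assumption)
WORSE_BOUND = 1e-2    # above this: "worse than world portfolio" (assumption)

def benchmark(annual_pof: float) -> str:
    if annual_pof < BETTER_BOUND:
        return "better than world portfolio"
    if annual_pof <= WORSE_BOUND:
        return "average dam"
    return "worse than world portfolio"

# Base-case values from the table above, before any standard-of-care release.
for name, pof in {"Dam x": 5.90e-3, "Dam y": 1.70e-3, "Dam z": 2.14e-4}.items():
    print(name, benchmark(pof))
```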

The impact of standard of care on dams survivability

The impact of standard of care on dams survivability can be evaluated quantitatively. The impact is multifaceted: releasing the standard of care increases the probability of failure and changes the benchmarking of the structure.

Of course, changing the probability of failure alters the risks generated by a dam.

Now you can really decide whether it is worth not rebuilding that broken inclinometer, or whether to embark on that new monitoring campaign.

This is an illustration of risk-informed decision making at its best. As a result, we get more transparency, more clarity and, finally, more awareness of the effects of our decisions.

Tactical and strategic planning to mitigate divergent events is one of the themes of our next book. The term “divergent” does not yet appear in our glossary; it will be part of the fourth edition we are preparing. Stay tuned for the announcement of its publication.

Tactical and strategic planning to mitigate divergent events

In short, hazards or exposures become divergent when they depart from long-term averages and “usual extremes”, in terms of frequencies and/or magnitude. For example, a hundred-year rain event that occurs three times in a short interval is a divergent event until a new long-term behavior is established.

Why are we talking about tactical and strategic planning for divergent risks?

Recent disasters all over the world, such as large-scale fires, flooding, hail and dust storms, locusts and epidemics have demonstrated the need to enhance tactical and strategic planning. There is also a blatant need to foster healthy awareness and reduce fears and panic reactions at all levels of our societies.

Climatic divergence from long-term averages and “usual extremes”, both in terms of frequencies and intensities, reinforces this need, as do a new wave of Green Deals, the Paris Agreement and other efforts. Key players around the world require more and more careful evaluations and risk-informed decision-making.

The goal of divergent-risk-informed decision-making is to allow systems’ healthy and ethical operational, tactical and strategic planning. That planning must be based on sensible estimates of risks of various origins and significance. Systems are defined as sets of elements, working together and/or interconnected, geared toward accomplishing a set of goals and objectives. Systems exist in any discipline, area or business, from administration to industry, the environment and, finally, society.

Tactical and strategic planning to mitigate divergent events

Consider Tactical and strategic planning to mitigate divergent events as a medicine to help:

  • contain fear and knee-jerk reactions,
  • reduce blunders, and finally
  • enhance value building and ethics, while planning for climate change and other “runaway” hazards.

To attain these goals, it is paramount to foster predictability and foreseeability of divergent hazards.

If you are not familiar with the terms predictability and foreseeability, do not worry. We will define them soon in detail. In addition they will be part of the fourth edition of our glossary.

Closing remarks

Divergent hazards are “present” today and will always be lurking in our future. Perhaps they will appear in the form of climate-change events, potential meteorite/asteroid collisions, solar flares (electromagnetic pulses), pandemics or super-volcanoes.

We just read KPMG’s Third Party Risk Management outlook 2020 and today we pitch in with comments on the KPMG survey about third party risk management.

Comments on KPMG survey about third party risk management

Risk integration

We discuss what its conclusions mean in terms of practical risk assessment and Enterprise Risk Management (ERM). At Riskope we started integrating third-party risks into ERMs and risk assessments twenty years ago. We note that, in the grand scheme of things, a third party may also mean neighbors. Of course, defining the limits of the system to be assessed is as important as its description.

Indeed, ERMs need to be convergent, i.e. bring in a 360° view of the system’s hazards, including suppliers, subcontractors and other third parties. This is why we always propose a multi-dimensional view of consequences which includes reputation and crisis potential. We also stress the need to review contractual Force Majeure clauses, given the very dynamic world we live in.

So, we were delighted to read in the KPMG report that a vast majority of the interviewees stated that a business’ reputation directly links to performance. This means that the ERM must integrate the reputational dimension. Additionally, it shows that our efforts go in a direction the markets are starting to recognize. Allow us to state it again: reputation is a dimension of the additive consequence function, not a standalone item.
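
As a minimal sketch of that statement (dimension names and figures are examples, not client data), reputation simply enters the additive consequence function like any other dimension.

```python
# Consequence dimensions expressed in monetary terms; reputation is one of them.
consequence_dimensions = {
    "direct_costs": 2.0e6,            # repair, replacement
    "business_interruption": 1.5e6,
    "environmental": 0.5e6,
    "reputational": 1.0e6,            # crisis potential, monetized
}

total_consequence = sum(consequence_dimensions.values())
annual_probability = 1e-3                         # example hazard probability
risk = annual_probability * total_consequence     # risk = p * total consequence
print(total_consequence, risk)
```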

Consistency, consistency, consistency!

The KPMG report calls for consistency across the enterprise. This means again that the ERM must be convergent in order to bypass existing information silos. Indeed, it is also our experience that companies oftentimes mitigate and prioritize their risks using siloed approaches. Those lead to money squandering; examples of this can be cyber security or the supply chain. Tools exist and we have been using them for a long time (see Chapter 8).

If companies address the previous points, then the claim that “half of businesses (50%) do not have sufficient capabilities in-house to manage all the risks they face” melts away. That claim drops away because the gap it describes is a result, and not a cause, of a poorly structured ERM approach and implementation.

Comments on KPMG survey about third party risk management

Indeed, firms can achieve both efficiency and effectiveness in their tactical and strategic planning. The key is to use risk-informed decision making or, as KPMG calls it, to take a risk-based approach. In terms of practical risk assessment, we can do this by prioritizing risks starting from the highest intolerable ones.
