# Probabilities inconsistencies in a Risk Management Framework

May 29th, 2019

Can you detect Probabilities inconsistencies in a Risk Management Framework?

While performing a Risk Management Functions audit, we were asked to review the following probability / annual-frequency categories.

| **Cat** | **Probability** | **Annual Frequency** |
| --- | --- | --- |
| 5 | >95% | More than once per year |
| 4 | 50 to 94% | At least once every other year |
| 3 | 20 to 49% | Once every 2 to 5 years |
| 2 | 5 to 19% | Every 5 to 20 years |
| 1 | <5% | Less than once every 20 years |

The values of the two columns are inconsistent. A probability of 95% does not correspond, for example, to “more than once per year”, and so on.

Simple mathematics (the Poisson distribution) helps fix these inconsistencies. The Poisson distribution links the frequency of an event (the number of events occurring in a fixed interval of time, at a known constant rate and independently of the time since the last event) to the probability of seeing one or more events: P(at least one event per year) = 1 − e^(−λ), where λ is the annual frequency. Thus we can rewrite the annual frequencies as follows:

| **Cat** | **How the annual frequency should be written to match the probabilities of the table above** |
| --- | --- |
| 5 | More than 3 per year |
| 4 | ~3 events over 4 years to 3 events per year |
| 3 | Once every 5 years to 3 events over 4 years |
| 2 | Once every 20 years to once every 5 years |
| 1 | Less than once every 20 years |
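As a quick numerical check, the corrected frequencies above can be derived by inverting the Poisson relation P(at least one event) = 1 − e^(−λ). This is a minimal sketch; the helper name `annual_rate` is ours, not the client's:

```python
import math

def annual_rate(p_at_least_one):
    """Invert the Poisson relation P(N >= 1) = 1 - exp(-lam)
    to obtain the annual event rate lam matching a given probability."""
    return -math.log(1.0 - p_at_least_one)

# Boundary probabilities of the client's scale
for p in (0.95, 0.50, 0.20, 0.05):
    print(f"P = {p:.0%} -> lam = {annual_rate(p):.2f} events/year")
```

A 95% annual probability corresponds to roughly 3 events per year, not merely “more than once per year”.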

If one instead keeps the annual frequency column as in the table under review (the first in this blogpost), the probability column should be rewritten to state:

| **Cat** | **How the probability should be written to match the annual frequency in the client’s table** |
| --- | --- |
| 5 | >63% |
| 4 | 39 to 63% |
| 3 | 18 to 39% |
| 2 | 5 to 18% |
| 1 | Less than 5% |

## Probabilities inconsistencies in a Risk Management Framework

Whatever corrective the client selects, we can clearly see the “range compression” effect that “binning” into categories produces toward the upper and lower ends of the ranges.

Indeed, there is a full array of mishaps that can occur above 63% (or more than 3 times a year). Furthermore, a very large number of industrial and mining accidents would show up in category 1, even though they represent very different risks.

For example, tailings dams have a probability of failure in the order of 1/1,000 to 1/10,000. Those values depend, of course, on a multitude of parameters. A tank of acid likely has a probability of spilling of 1/100, a value that again depends on the make, position, and so on.

Grouping all of these into “Category 1” is misleading. It can lead to significant waste of mitigative capital and/or undesired and undetected exposures.
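To illustrate the binning problem, here is a hypothetical function mirroring the client's probability scale (the name `probability_category` is ours): it collapses annual probabilities two orders of magnitude apart into the same category.

```python
def probability_category(p_annual):
    """Hypothetical binning mirroring the client's probability scale."""
    if p_annual > 0.95:
        return 5
    if p_annual >= 0.50:
        return 4
    if p_annual >= 0.20:
        return 3
    if p_annual >= 0.05:
        return 2
    return 1

# Risks two orders of magnitude apart land in the same bin
assert probability_category(1 / 100) == 1      # acid tank spill
assert probability_category(1 / 10_000) == 1   # tailings dam failure
```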

## Consequences inconsistencies in a Risk Management Framework

Obviously, consequence scales in many risk assessments present similar problems.

For instance, in almost all the risk assessments we review, users consider only one consequence category at a time when qualifying risks, ignoring the fact that consequences (dimensions) are additive.

## Risk matrices and tolerance inconsistencies in a Risk Management Framework

We will now discuss the inconsistencies introduced by the “risk matrix” (FMEA, PIG) coloring scheme. Indeed, the coloring leads to some disconcerting results that can cause waste of mitigative funds or unwanted exposures. The client’s matrix has five likelihood and five consequence classes, each ranging from 1 to 5. The numbers in the boxes represent the number of risks binned in each cell. This client uses four colors: green is the most benign, red the most significant; light blue and yellow sit “in the middle” and are difficult to differentiate.

Consider, for example, Fukushima. The accident would rate as likelihood = 1, consequence = 5, thus “yellow”, or risk = 5. Next year, the Fukushima operations manager’s seasonal cold would rate likelihood = 5, consequence = 1, thus “light blue”, but again risk = 5. Now imagine decision making based on such a misleading risk assessment! Some would argue the colors are different, but the risk numbers are identical! Others would say that a rare catastrophic event is worse than an almost certain minor-consequence event. There would be no rational way to end this discussion. Sadly, that is what happens in industries of various kinds around the world, and it is one of the causes of major management blunders and catastrophic accidents.
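The Fukushima comparison can be sketched with the common multiplicative scoring rule (risk = likelihood × consequence); this illustrates that scheme in general, not the client's exact formula:

```python
def risk_score(likelihood, consequence):
    """Common multiplicative cell scoring in a 5x5 risk matrix."""
    return likelihood * consequence

fukushima_like = risk_score(1, 5)  # rare, catastrophic event
seasonal_cold = risk_score(5, 1)   # frequent, trivial event
assert fukushima_like == seasonal_cold == 5  # identical scores, very different risks
```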

## ORE use for Risk Informed Decision Making (RIDM)

These are some of the reasons why, in ORE, we have suppressed the need for categories: we use the estimated probabilities and consequences without binning them, then compare the risks to explicit risk tolerance thresholds (corporate and societal).

