Lately we have been busy working on a mining-related comprehensive risk assessment. Integrating traditional knowledge in risk assessments proved to be an essential skill, as public records and other usually available data were almost non-existent.
We use the adjective “comprehensive” for risk assessments covering multiple hazards (earthquake, flood, man-made, etc.) and a variety of targets, such as population, the environment, and supply chain network elements.
When performing a comprehensive risk assessment it is fairly common to draw data from various information sources, each coming with its own level of credibility and uncertainty.
In the case cited above, for example, the geologist in charge of the project assessed the potential for rockfalls impinging on the access road and characterized each source with a likelihood-magnitude relationship.
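To make this concrete, here is a minimal sketch of what such a relationship can look like. The power-law form and the parameters f0 and b are purely illustrative assumptions, not the values the geologist actually derived:

```python
# Illustrative likelihood-magnitude relationship for one rockfall
# source: the annual frequency of events at or above a given block
# volume, assumed here to decay as a power law of the volume.

def annual_frequency(volume_m3, f0=0.5, b=1.2):
    """Annual frequency of rockfalls with block volume >= volume_m3.

    f0 is the assumed frequency of events >= 1 m3; b is the assumed
    decay exponent. Both are invented for this sketch.
    """
    return f0 * volume_m3 ** (-b)

for v in (1, 5, 10, 50):
    print(f"volume >= {v:>2} m3: ~{annual_frequency(v):.3f} events/year")
```

Each rockfall source along the road gets its own such curve, so rare, large events carry lower annual frequencies than the frequent small ones.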
We also conducted interviews with the local population, who had extensive life experience of the specific locations, and they provided us with anecdotal descriptions associated with magnitudes.
Of course, delivering likelihood estimates is a daunting task for an untrained individual, so we spent considerable effort encoding these anecdotes into a witness-based (empirical) likelihood-magnitude relationship. The similarity with integrating big data and thick data to deliver a “complete view” of local realities or processes is striking: one source can be extremely scientific and claim (rightly so or not…) precision based on observation, while missing the more complex story transmitted generation after generation by local inhabitants.
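As a minimal sketch of such an encoding (every anecdote, volume range, and recurrence interval below is invented for illustration):

```python
# Hypothetical encoding of witness anecdotes into an empirical
# likelihood-magnitude relationship: each anecdote becomes a
# (magnitude range, annual-frequency range) pair, with deliberately
# wide frequency bounds to reflect untrained recollection.

anecdotes = [
    # (description, block volume range in m3, recalled recurrence in years)
    ("small stones most springs", (0.01, 0.1), (0.5, 2)),
    ("boulder reached the road twice in ~30 years", (1, 5), (10, 60)),
    ("house-sized block once in living memory", (50, 200), (40, 200)),
]

empirical_points = []
for desc, (v_lo, v_hi), (t_lo, t_hi) in anecdotes:
    # A recurrence of t years maps to a frequency of 1/t per year;
    # the shorter recalled interval yields the upper frequency bound.
    f_lo, f_hi = 1.0 / t_hi, 1.0 / t_lo
    empirical_points.append(((v_lo, v_hi), (f_lo, f_hi)))
    print(f"{desc}: {v_lo}-{v_hi} m3 at {f_lo:.3g}-{f_hi:.3g} /year")
```

The resulting points can then sit alongside the geologist's curve, each carrying its own credibility, which is exactly where the integration question below arises.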
So, the question is: how do we beneficially integrate all available information in a coherent, unified way to conduct a comprehensive risk assessment? Is integrating traditional knowledge in risk assessments truly beneficial?
As we work probabilistically, the approach to integration we have advanced over the years is to assign a “degree of credibility” to the variables of any problem (in the example above, the likelihood of a rockfall and its magnitude). The “degree of credibility” increases (or decreases) the uncertainty range of the variable. It is a sort of “a priori Bayesian estimate” based on empirical distributions of the variables. We never assume a specific distribution type (Gaussian, log-normal, etc.), as assuming a distribution can be in itself a hazardous assumption.
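A minimal sketch of this mechanism follows; the 0-to-1 credibility scale and the linear widening rule are illustrative choices, one of many possible:

```python
# Sketch of a "degree of credibility" adjustment: a variable is
# carried as an empirical range, and a lower credibility widens that
# range instead of forcing a named distribution onto the data.

def widen_range(lo, hi, credibility):
    """Widen [lo, hi] symmetrically as credibility drops from 1 to 0."""
    center = (lo + hi) / 2.0
    half = (hi - lo) / 2.0
    factor = 2.0 - credibility  # credibility 1 -> x1 width, 0 -> x2 width
    new_lo = max(center - half * factor, 0.0)  # frequencies stay non-negative
    return new_lo, center + half * factor

# The same nominal annual-frequency range carried with two different
# degrees of credibility (numbers invented for illustration):
print(widen_range(0.02, 0.05, credibility=0.9))  # instrumented survey
print(widen_range(0.02, 0.05, credibility=0.4))  # witness-based anecdote
```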
In our practice we see many problems that would require probabilistic treatment actually being dealt with deterministically, only because people feel probabilities are “too mathematical” and data are “too imprecise” to start with. That’s where huge mistakes occur: probabilities allow us to consider the various sources of uncertainty and evaluate their impact on the big picture. Thus we are of the opinion that even a rudimentary probabilistic analysis is better than working deterministically, and that the inclusion of uncertainties is far superior to “artificial” parametric studies (varying one or two parameters at a time, for example, to see their influence on the overall results).
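The contrast is easy to show on a toy example. In the sketch below, the risk model (frequency times probability of impact times consequence), the parameter ranges, and the uniform sampling are all invented for illustration:

```python
import random

# One-at-a-time parametric study vs. a rudimentary Monte Carlo
# treatment on a toy annual-risk model.

ranges = {"freq": (0.01, 0.1), "p_impact": (0.1, 0.4), "cost": (1e5, 1e6)}
base = {k: (lo + hi) / 2 for k, (lo, hi) in ranges.items()}

def risk(p):
    """Toy annual risk: event frequency x impact probability x cost."""
    return p["freq"] * p["p_impact"] * p["cost"]

# Parametric study: vary one parameter at a time, others at base case.
for name, (lo, hi) in ranges.items():
    r_lo = risk({**base, name: lo})
    r_hi = risk({**base, name: hi})
    print(f"varying {name}: {r_lo:,.0f} to {r_hi:,.0f} per year")

# Monte Carlo: vary everything together and read off the spread.
sims = sorted(
    risk({k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()})
    for _ in range(10_000)
)
print(f"MC 5th-95th percentile: {sims[499]:,.0f} to {sims[9499]:,.0f} per year")
```

Even this crude version reveals what the one-at-a-time study hides: the extreme outcomes occur when several unfavorable values coincide.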
Unreasonably precise probabilities are to be avoided at all costs. All parameters should be delivered with a range and an expression of uncertainty (various options are available).
Likelihoods lower than 10⁻⁶ per year are to be considered preposterous as soon as any uncertainty is present, which is always the case, in any industry, anywhere in the world. Indeed, 10⁻⁶ to 10⁻⁵ per year is considered the limit of credibility in many disciplines.
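A tiny sketch of how such a floor can be enforced in practice; the exact floor value and the warning behavior are illustrative choices:

```python
# Flag any quoted annual likelihood below the credibility floor
# instead of taking it at face value.

CREDIBILITY_FLOOR = 1e-6  # per year; 1e-6 to 1e-5 is the cited limit range

def vet_likelihood(p_annual):
    """Return the likelihood, floored at the credibility limit."""
    if p_annual < CREDIBILITY_FLOOR:
        print(f"warning: {p_annual:.1e}/year is below the credibility "
              f"floor, reporting {CREDIBILITY_FLOOR:.0e}/year instead")
        return CREDIBILITY_FLOOR
    return p_annual

print(vet_likelihood(3e-8))  # a "once in 33 million years" claim gets floored
```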
By including uncertainties and integrating traditional knowledge in risk assessments, we dare say that we are generally right as opposed to precisely wrong, and we consider that a major benefit, especially when dealing with the public.