Risk assessment and automation bias
Jan 27th, 2021
In these days of AI, IoT and other technological developments, risk assessment and automation bias need to be discussed together.

Automation bias is the propensity of users to favor suggestions from automated decision-making systems. Victims of automation bias tend to ignore contradictory information produced without automation, even when that information is correct.
Automation bias is the latest in a long dynasty
Automation bias is heir to a dynasty of biases built on the same principle, yet seemingly very different from one another. For instance, in what we believe is a rough chronological order:
- trusting a vendor simply because the price is written down,
- blindly following instructions or orders, even when they appear grotesque,
- trusting the results of a piece of software without checking whether they make sense,
- following GPS instructions even if the area they lead us into looks shady, and finally,
- accepting the conclusions of an AI, IoT or space-observation program as good to go, without any critical cross-check analysis.
We discuss below how risk assessments, when combined with automation bias, may lead their users to inappropriate decisions.
Risk assessment and automation bias
ISO 31000 and other standards are extremely clear: risk assessments must first state the context and the conditions of validity of the evaluations. These two statements clarify the reasons behind the assessment and the contours of its validity. Neither should ever be forgotten, and any user of the assessment results should always refer to them.
Many biases afflict risk assessments, especially if those two statements are not precise and transparent enough.
In more than thirty years of quantitative risk assessment practice, we have seen a number of risk assessments flawed by that lack of clarity and by biases. That is why we offer risk assessment audits as a service and why we have written so much on the subject. We have compiled a list of what leads to wrong assessments of risk, summarized below.
- Uncritical acceptance of a step-by-step modus operandi rather than attention to how the factual environment/situation develops. This fallacy leads to prioritizing risks by what is urgent rather than by what is important.
- Accepting existing reports at face value, while remaining unaware of the conditions noted in the model used for the decision-making.
- Unfamiliarity with, and/or ignorance of, the context.
- Outdated reports that do not reflect changes in the context or situation.
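As a minimal illustration (a hypothetical Python sketch, not part of ISO 31000 or of our audit methodology), the record below carries the two statements explicitly, and a simple guard refuses to hand out risk scores once the assessment is outdated or the declared context no longer matches, catching two of the failure modes listed above instead of letting automation bias wave them through.

```python
# Hypothetical sketch: a risk assessment record that carries its context and
# conditions of validity, plus a guard that refuses to return results blindly
# when those statements no longer hold.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskAssessment:
    context: str                  # the context statement required by the standard
    valid_until: date             # condition of validity: expiry of the evaluation
    assumptions: list[str]        # other stated conditions of validity
    risk_scores: dict[str, float] = field(default_factory=dict)


def results_for(assessment: RiskAssessment,
                current_context: str,
                today: date) -> dict[str, float]:
    """Return the scores only if the context and validity conditions still hold."""
    if today > assessment.valid_until:
        raise ValueError("Assessment is outdated; it must be redone before use.")
    if current_context != assessment.context:
        raise ValueError("Context has changed; the scores do not apply as-is.")
    return assessment.risk_scores


if __name__ == "__main__":
    assessment = RiskAssessment(
        context="Open-pit operations, dry season, current fleet",
        valid_until=date(2021, 6, 30),
        assumptions=["No change in regulation", "Same haulage contractor"],
        risk_scores={"slope failure": 0.12, "haul-road collision": 0.05},
    )
    # Using the scores without re-reading the two statements is exactly the
    # automation-bias failure mode described above.
    print(results_for(assessment,
                      "Open-pit operations, dry season, current fleet",
                      date(2021, 3, 1)))
```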