# An easy way to define the probability of a fire in a residential area of Vancouver

Nov 4th, 2009

## An easy way to define probability

Countless times we have heard “…but we have no statistics!” or “statistics cannot be gathered for this specific topic!”, with the usual conclusion that a proper risk assessment cannot be performed because of the “lack of statistics”.

We think this is just an excuse to avoid looking at reality with a sharper eye, and, unfortunately, a result of statistics and probability usually being taught at the same time in schools.

**Probabilities and Statistics**

Probabilities and statistics are two separate sciences, but people tend to forget that!

Lack of data, expensive research, inability to gather numbers, etc. should never be a barrier to performing a proper quantitative risk analysis, especially since such a study requires ranges of values or orders of magnitude of the probabilities, not absolute, unique, “fatally wrong” numbers.

Years ago, in the book “Improving Sustainability through Reasonable Risk & Crisis Management” (F. Oboni & C. Oboni, ISBN 978-0-9784462-0-8), we introduced a do-it-yourself methodology for defining probabilities based on judgemental/empirical expertise and knowledge.

The following micro case study will show how good guided thinking can deliver an evaluation of the range of probability of an event without using statistics. Then we will use statistics (we purposely selected a case study where statistics do exist) to derive the “precise magic number” and compare it with the evaluation.

**Why are we doing this?**

Because we want to show you that risk-based decision making is not only available to monster global companies, but can be used by everyone, including individuals like you and me.

**Fire Hazard in Kitsilano, Vancouver**

The other morning I was walking to get my coffee and saw yet another fire truck and an ambulance next to a burning house. That made me wonder: what is the probability of a fire in Vancouver, or rather in my neighborhood, a square of, let’s say, 1 km by 1 km?

**Using the book methodology**

I have witnessed at least one serious fire every year for the last 3 years, which leads to the VH category of probabilities; but because of the widespread fire-alarm program and construction codes, the houses are in a state that can be defined as G to F, even though most structures are wooden and flammable materials are present.

Thus, following the methodology, the probability of occurrence of a fire in my neighborhood next year can be estimated at a value between 1.5×10⁻¹ and 2.5×10⁻¹ (roughly one chance in seven to one chance in four for a fire to occur).

**Using National Statistics**

The fire-injury rate in Canada in 1999 was 5 per 100,000 population.

The population of metropolitan Vancouver in 1999 was 558,138.

Metropolitan Vancouver covers an area of 114.67 km².

The population density can therefore be derived as about 4,870 inhabitants/km².

We can thus compute the frequency of fire injuries in Vancouver as 0.24 per km² per year.

I can now compute the probability of having a fire injury in my neighborhood next year (I will not enter into the details of the calculation, as it would scare a few, but if you are interested just ask) at p = 0.19 (or approx. one in five).
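The post deliberately leaves out the details of the calculation. A minimal sketch of one plausible route, assuming fire injuries arrive as a Poisson process at the per-km² rate derived above (this is our assumption, not necessarily the authors' exact model, which is why the result differs slightly from the quoted 0.19):

```python
import math

# Canadian fire-injury rate (1999): 5 per 100,000 population
rate_per_person = 5 / 100_000

# Metropolitan Vancouver, 1999
population = 558_138
area_km2 = 114.67

# Population density and annual fire-injury rate per km²
density = population / area_km2        # ~4,870 inhabitants/km²
lam = density * rate_per_person        # ~0.24 fire injuries per km² per year

# Under a Poisson assumption, the probability of at least one
# fire injury in a 1 km x 1 km square during the next year is:
p_at_least_one = 1 - math.exp(-lam)

print(f"density = {density:.0f}/km², lambda = {lam:.2f}, p = {p_at_least_one:.2f}")
```

This sketch yields p ≈ 0.22, the same order of magnitude as the 0.19 quoted in the post; the small difference suggests the authors used a slightly different model or a rounded rate, but either way the result falls near the 0.15–0.25 range obtained with the book methodology.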

So, the book methodology not only leads to the right order of magnitude but delivers a safe estimated range, in a couple of minutes of work and with no statistics.

Needless to say, broad estimates of the ranges of probabilities are what we need to perform risk assessments and support risk-based decision making. Using ranges is safer than using falsely precise numbers, and estimating risks is better than walking blindly!


## Comments

Interesting, you claim there are no statistics, yet:

- The statistic of 5 fires per 100,000 population is cited.
- The statistic of 515K people in Vancouver is cited.
- The statistic of 114 km² area for Vancouver is cited.
- The statistic of the population density of Vancouver, 4,870/km².
- Then the last: the statistic of the probability of the event occurring, 0.19.

So when you remove all of those statistics, what calculation is there left to make?

Dear Stephen,

First of all, thank you for commenting on our post! It is people like you who make blogs interesting and lively.

We feel it would be useful to start with a couple of definitions:

Statistics: A branch of mathematics that describes and reasons from numerical observations; or descriptive measures of a sample.

Probability: A branch of mathematics that measures the likelihood that an event will occur. Probabilities are expressed as numbers between 0 and 1. The probability of an impossible event is 0, while an event that is certain to occur has a probability of 1.

Thus some of the items you quote are not “statistics” but “simple” data (the surface of the city, the number of inhabitants)… nevertheless, let’s not get lost in semantics!

The aim of our discourse is precisely to show that probabilities (i.e. the likelihood of an event occurring) can be derived, by using the appropriate tools, even if only some observations are available (as opposed to “complete” statistics).

We purposely chose an example where “complete” statistics were available in order to show that the result we are after, i.e. the likelihood of seeing a fire next year in our neighborhood, yielded by a very simplified approach with no or very poor statistics at hand, is very similar to the likelihood obtained by using the “complete” available statistics.

At the end of the day what we want to demonstrate is that it is possible to make risk assessments even if complete statistics are NOT available.

And why are we so adamant in doing this demonstration? Because we believe it’s a pity that risk assessments are not performed with the excuse that “complete statistics” are not available.

It’s as if people would prefer to go around blindfolded rather than wearing shades in a color they do not like!

Please keep discussing/commenting all our posts!

Kind regards

Cesar & Franco

Cesar and Franco – your micro example is interesting for its logic and simplicity, but the outcome is not hugely helpful in my view. A one-km square contains a lot of houses, each with its unique set of risks. Those risks are largely of the difficult-to-assess type, i.e. the varying attitudes of the people involved towards fire safety.

Large companies do risk analysis, but they do not appear to address the real areas of risk, which are those resulting from human factors (irrational, unpredictable, illogical, unreasonable, counter-intuitive, etc.). This is because these risks are very hard to “read” correctly, with or without statistics; they appear to be external to the organisation and are usually presumed to be non-amenable (government policies, international actors and factors, natural events, crime and worse).

The risk analysis that is completed thus impacts only internal issues. The result is a health-and-safety regime that sometimes borders on the completely fatuous, and protective security that provides lots of local employment but is largely ineffective. The result is a company culture whose appetite for risk is non-existent. These companies face the Chinese as competition in most parts of the world these days. They need all the help they can get to compete, not on the same terms but by changing the game when it comes to risk, creating fleeting opportunities and just being more agile.

If you can help by injecting some common sense and simplicity to risk analysis, management and exploitation (that last bit requires leadership and courage) then your ideas are certainly appealing.

Thank you for your positive closing statement: indeed, our mission is to bring some sanity and good sense to areas where people sometimes tend to slide into excessive detail and paralysis by analysis.

In our courses we always say that one of the challenges for a risk manager is to find the proper level of simplification (or zooming lens, if you allow this metaphorical language). So, if a risk assessment is developed, for example, at “country scale”, as we did for humanitarian land-mine clearing, then of course one cannot afford the same level of detail possible when studying a single family dwelling.

Kindest regards

Franco

It seems people are always knocking stochastic methods for not being personalized enough. But they are by nature abstractions of reality, taking many specifics and making a more general picture or estimation.

Not to say we shouldn’t pay attention to specific cases, just that this is not what these tools are meant to accomplish.

This is an interesting topic of discussion. While I understand that there are intellectual historical lines drawn between probability and statistics, I don’t think it is merely the educational system that blurs them; it is the fact that they are used hand in hand the vast majority of the time.

I think I see what you’re saying, and I will try to restate it: short of extensive data boiled down into our comforting statistics (like a McDonald’s, they’re always the same), we can use more general, or perhaps just less obviously applicable, data to make estimations and predictions.

And perhaps it would be better to remember that we have other tools besides our standard statistical methods available to us.

Carl, thanks for that. I am taking the perspective of a senior manager who asks for a risk assessment and gets an answer that he could have arrived at through common sense. The “flaw” of averages often contributes to a risk that is somewhat obvious (the drunk weaving down the road will, on average, follow the white line in the middle, and he will also, near certainly, be killed by a car).

However, I agree that any system which can take what appears to be random data and indicate a slightly greater risk in one area rather than another can be very helpful in allowing the decision-maker to apply scarce resources more appropriately.

My background is military intelligence, where you had to have the courage to challenge a commander’s intent when there was well-argued intelligence to support the argument, and the confidence to define risks and opportunities long before they become obvious.

