I was very surprised to read this morning that an Italian court had convicted seven scientists of failing to provide a warning of an earthquake that tragically killed more than 300 people. *See* “Italy Orders Jail Terms for 7 Who Didn’t Warn of Deadly Earthquake” New York Times (Oct. 23, 2012). I think that the story illustrates some important, but often overlooked, aspects of risk.

First, risk and probability are not the same. Probability (P) is the likelihood that some event will occur in the future (*e.g.,* that a tossed coin will land “heads”). It is usually expressed as a number from 0 (no likelihood) to 1 (certainty). Correspondingly, the likelihood of an event not occurring is 1-P. If the probability of a coin landing heads is .4, then the probability of it not landing heads is .6. The real trick is determining what the probability of an event actually is.
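The complement rule described above can be sketched in a few lines of Python (the .4 coin bias is just the hypothetical figure from the example):

```python
# Complement rule: if P is the probability an event occurs,
# then 1 - P is the probability it does not occur.
p_heads = 0.4              # hypothetical probability the coin lands heads
p_not_heads = 1 - p_heads  # probability it does not land heads

print(p_heads, p_not_heads)  # the two must always sum to 1
```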

A frequentist approach to probability would be to toss the coin numerous times and record the results. For the frequentist, probability exists objectively and externally. The problem with this approach is that it is often very difficult to carry out in the “real world.” After all, the Italian scientists couldn’t set up the same conditions multiple times and count the number of earthquakes. The same problem exists for companies as they attempt to predict their own future financial performance.
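Where repetition *is* possible, the frequentist recipe is simple: repeat the experiment and take the relative frequency. A minimal simulation, assuming a hypothetical coin biased to land heads 40% of the time:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TRUE_P_HEADS = 0.4  # hypothetical bias; in reality this is what we're estimating
tosses = 100_000

# Toss the simulated coin many times and count heads.
heads = sum(random.random() < TRUE_P_HEADS for _ in range(tosses))

# The frequentist estimate: relative frequency of heads.
estimate = heads / tosses  # converges toward 0.4 as tosses grow
```

With enough tosses the estimate settles near the true bias; the earthquake problem is precisely that no such repetition is available.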

The eighteenth-century mathematician Thomas Bayes had another approach to probability, which is referred to as the subjectivist approach. Bayes held that the probability of an event is the degree of belief that a person has that it will occur on the basis of known information. The subjectivist therefore views probability as the expression of an inner state – how certain one feels about whether something will happen, or not.
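The subjectivist’s degree of belief is revised as new information arrives, via Bayes’ rule: P(H|E) = P(E|H) × P(H) / P(E). A small sketch with purely hypothetical numbers:

```python
# Bayesian updating: revise a degree of belief in hypothesis H
# after observing evidence E. All numbers are hypothetical.
prior = 0.01            # initial degree of belief that the event will occur
p_e_given_h = 0.90      # chance of seeing the evidence if the event will occur
p_e_given_not_h = 0.10  # chance of seeing the same evidence otherwise

# Total probability of the evidence (law of total probability).
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' rule: the updated (posterior) degree of belief.
posterior = p_e_given_h * prior / p_e  # roughly 0.083
```

Note that even strong evidence only moves a very small prior so far – a point with some bearing on rare events like major earthquakes.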

People, of course, make all kinds of mistakes when assessing probability. For example, they often judge an entire event to be more probable than its individual elements – the “conjunction fallacy” – even though a combination of conditions can never be more likely than any one of them alone. People can also be expected to assign a higher probability to an event occurring if they are told that it has in fact occurred (“hindsight bias”).
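The first of these mistakes runs up against a basic rule of probability: P(A and B) = P(A) × P(B|A), which can never exceed P(A). A quick check with hypothetical numbers:

```python
# Conjunction rule: a combined event is never more probable than its parts.
p_a = 0.3          # hypothetical P(A)
p_b_given_a = 0.5  # hypothetical P(B given A)

p_a_and_b = p_a * p_b_given_a  # P(A and B) = 0.15, necessarily <= P(A)
```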

What then is risk? Risk is usually defined as the probability (P) of an event multiplied by its consequence (X). If, for example, you have a 10% probability of winning $100, your risk is $10 (P*X). This illustrates another important point about risk – it can involve positive as well as negative outcomes. In making financial decisions, I believe that it is usually most appropriate to view risk as probability weighted over the entire range of outcomes and not simply a single outcome. *See* “To the SEC, Risk is a Four Letter Word”.
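Both views – risk of a single outcome and risk over a range of outcomes – can be sketched as expected values. The outcome table below is hypothetical:

```python
# Single outcome: 10% probability of winning $100.
risk_single = 0.10 * 100  # P * X = $10

# A range of outcomes: (consequence, probability) pairs whose
# probabilities sum to 1. Consequences may be gains or losses.
outcomes = [(-50, 0.2), (0, 0.5), (100, 0.3)]  # hypothetical figures

# Risk over the whole range: the probability-weighted sum.
risk_range = sum(x * p for x, p in outcomes)  # -10 + 0 + 30 = 20
```

Viewed this way, a single negative outcome can be swamped (or not) by the rest of the distribution, which is exactly why looking at the full range matters.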

Finally, probabilities, despite what the Securities and Exchange Commission might believe, are neither reasonable nor unreasonable. They are simply numbers. Whether it is reasonable for someone to accept a risk will depend upon that person’s particular circumstances.

It is entirely possible that the scientists properly evaluated the risk (probability multiplied by outcome), but that wouldn’t be the end of the analysis. There is also the risk of a false alarm (people could be hurt evacuating the area, and economic losses could be incurred). In weighing these risks, the scientists may have properly assessed the risk of a false alarm to be greater than the risk of no warning. However, very unlikely events happen all the time. Just because they do doesn’t mean that someone erred in determining their probability or in weighing the risks.
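The weighing described above can be sketched as a comparison of expected losses. Every number here is hypothetical – the actual figures and probabilities the scientists faced are unknown – but the structure of the decision is the point:

```python
# Hypothetical inputs: none of these reflect the actual case.
p_quake = 0.02            # assumed probability of a major quake
loss_no_warning = 1000.0  # assumed loss if a quake strikes with no warning
loss_false_alarm = 50.0   # assumed loss of an evacuation with no quake

# Expected loss of each choice (probability times consequence).
expected_loss_silent = p_quake * loss_no_warning       # 0.02 * 1000 = 20
expected_loss_warn = (1 - p_quake) * loss_false_alarm  # 0.98 * 50  = 49

# On these numbers, staying silent has the lower expected loss --
# and yet the unlikely quake can still happen without anyone having erred.
```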