
Could the Sydney siege have been predicted and prevented?

The trouble with experts
There are good reasons for not relying solely on experts and instead relying on formal (actuarial) models that combine data to make predictions for us.

First, people are prone to bias in their judgments, and one of the best known and most amply documented is hindsight bias. This is the tendency to overestimate the probability that you would have correctly predicted an event after that event has occurred.

This bias can make us overconfident in our ability to predict the outcome of events. It also stops us from learning which indicators we should actually pay attention to if we want to predict an outcome accurately.

Second, expertise is no guarantee of prediction accuracy. U.S. psychologist Paul E. Meehl reviewed twenty studies that compared clinical judgments of psychiatrists and psychologists with a regression model (a statistical model that combines predictor variables to find the best combination for predicting an outcome variable).

There was not a single study in which the clinician outperformed the statistical model in making predictions.

Further studies of psychiatrists and psychologists in a psychiatric facility trying to predict the dangerousness of forty newly admitted male patients showed similarly poor results.

Clinicians' judgments accounted for just 12 percent of the variance in the outcome, compared with 82 percent for a linear regression model using the same information.
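
To make the "variance accounted for" comparison concrete, here is a minimal sketch in Python. The data are simulated stand-ins (hypothetical admission ratings and an outcome score), not figures from the studies above; only the metric, R-squared, corresponds to those percentages.

```python
# A minimal sketch of actuarial prediction. All data are simulated:
# X stands in for hypothetical admission ratings, y for an observed
# outcome score. Nothing here reproduces Meehl's actual datasets.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_predictors = 40, 3

X = rng.normal(size=(n_patients, n_predictors))          # admission ratings
true_weights = np.array([0.8, 0.5, 0.3])
y = X @ true_weights + rng.normal(scale=0.5, size=n_patients)  # outcome

# Fit a linear regression y ≈ b0 + X @ b by least squares.
X1 = np.column_stack([np.ones(n_patients), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ coef

# R-squared: the "variance accounted for" behind the 12 vs. 82 percent figures.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"Variance accounted for: {r_squared:.0%}")
```

A clinician's predictions can be scored the same way, by asking how much of the variance in the observed outcome they explain, which is what makes the head-to-head comparison possible.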

So can statistics predict a crime?
Results like these have led to large efforts to develop and validate actuarial (statistical) methods for predicting violence.

One of the most comprehensive and well-regarded approaches is the Classification of Violence Risk (COVR). This uses statistical methods to classify people into five risk groups, ranging from very low risk to very high risk.
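
By way of illustration only, the sketch below shows the general shape of such a classification: an estimated probability of violence is mapped onto five ordered risk groups. The thresholds are invented for this example; they are not COVR's published cut scores, and COVR itself rests on classification-tree methods rather than a single probability cut.

```python
# A hypothetical sketch of binning an estimated violence probability
# into five ordered risk groups, in the spirit of instruments like
# COVR. The thresholds below are made up for illustration.
RISK_GROUPS = [
    (0.01, "very low risk"),
    (0.08, "low risk"),
    (0.26, "average risk"),
    (0.56, "high risk"),
    (1.00, "very high risk"),
]

def classify_risk(p_violence: float) -> str:
    """Map an estimated probability of violence to a risk group."""
    for threshold, label in RISK_GROUPS:
        if p_violence <= threshold:
            return label
    return RISK_GROUPS[-1][1]

print(classify_risk(0.03))   # -> "low risk"
```

In practice, of course, the estimated probability must itself come from a validated statistical model, and that is where the real difficulty lies.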

This approach was developed for use in clinical populations and so may well be of little value for predicting violence in the general population. It does at least provide a set of criteria for assessment and a formal model.

But is it accurate? The proponents of the approach state that it is, but others have pointed to a need to understand the margins of error. Further, there is a debate about the procedures used to compare the accuracy of these methods.

But prediction is hard, especially when there is a very low incidence of the event that we are trying to predict.

In 1955, Meehl and his colleague Albert Rosen stated the condition under which a diagnostic test can be called efficient: prediction using the test must be better than prediction using the raw base rates alone.

By raw base rates we mean the rate at which the thing we are trying to predict occurs in the population. Violent gun death in Australia is thankfully rare: about 0.2 per 100,000 residents. The rate is lower still if we count only events involving people with mental illness.
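
To see why such a low base rate is so punishing, consider a back-of-the-envelope Bayes calculation, sketched in Python below. The 99 percent sensitivity and specificity are hypothetical assumptions, far more accurate than any real instrument; only the base rate is the figure above.

```python
# Bayes' theorem applied to a hypothetical near-perfect screening test.
# The sensitivity and specificity figures are illustrative assumptions;
# the base rate is the article's 0.2 per 100,000.
base_rate = 0.2 / 100_000      # P(violent outcome)
sensitivity = 0.99             # P(test positive | violent outcome)
specificity = 0.99             # P(test negative | no violent outcome)

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_positive   # P(violent outcome | positive)

print(f"P(positive test) = {p_positive:.5f}")        # about 1 in 100
print(f"P(violence | positive test) = {ppv:.4%}")    # about 0.02%
```

Even this near-perfect test would flag about one in a hundred people, and more than 99.9 percent of those flagged would never go on to be violent. Simply predicting "no violence" for everyone, using the base rate alone, would be more accurate, which is exactly Meehl and Rosen's point.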

At present there are no psychometric instruments that consistently pass the criterion of efficiency with a base rate as low as this.

Further, we need to be very careful about stereotyping the mentally ill as potentially “dangerous.” It is simply not the case that all people with serious mental illnesses are prone to violence.

There are very specific factors that govern the complex relationship between mental illness and violence. We need to understand those factors and prevent people from experiencing them.

Predictive policing
The consequences of using prediction to prevent crime are explored in the 2002 movie Minority Report.

Carolyn Semmler is Senior Lecturer in Psychology at the University of Adelaide. This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).
