
It is such a fashionable news headline: “One in five Australians … has a mental illness, is in the wrong job, believes they will work beyond the age of 70, experiences racism at sports events, has some form of disability, suffers from chronic pain, waited longer than they thought was acceptable to see a GP, doesn’t believe in climate change, suffers from allergies and admits to drink-driving.”

However, without a social context, this national statistic is largely meaningless.

The headline, “One in five Australians drink and drive: study”, was compelling. Surely such a statistic highlighted the need for further action on such a dangerous practice. However, this so-called “study” – conducted by a polling agency – involved just 1500 people completing an online survey. The small sample was then reported as representative of all Australians.

Researchers often use convenience samples – people who are easy to reach. The advantages of this type of sampling are the availability and the speed with which data can be gathered. However, the risk is that the sample might not be representative of all Australians.
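The distinction matters because the weakness of such a poll is bias, not arithmetic. If the 1500 respondents had been a genuine random sample, the sampling error on a “20 per cent” figure would be small; a quick calculation (a sketch using the standard formula for the margin of error of a proportion, with the 20 per cent and 1500 figures from the drink-driving poll) makes the point:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Figures from the drink-driving poll: 20% of 1500 online respondents.
moe = margin_of_error(0.20, 1500)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 2.0
```

In other words, a random sample of 1500 would pin the national figure down to within about two percentage points. But an online convenience sample is not random, and no amount of sample size cures self-selection bias: the people who complete a pop-up survey may differ systematically from the people who don’t.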

Surveys require people to tick a box or mark a five-point Likert scale. By asking respondents to answer “yes” or “no”, or “most of the time”, “some of the time” or “never”, surveys invariably gloss over the complexities within social experiences. Consider the question: “Have you ever driven a car after drinking?” How many drinks were required to tick “yes”? Is a glass of wine with a meal sufficient? Or do I need to be off my face?

A simplistic question invariably receives a simplistic answer. But with the aid of computer software, a simplistic answer is given power. People listen to numbers. A number such as “20 per cent of Australians” has media currency, irrespective of the size of the sample or the methods used to collect the information.

It is sometimes difficult to differentiate credible social research from the rest. Few of us have the time, or indeed the expertise, to go back to the original data and see the flaws in the research design, the misrepresentation of the data and the over-simplification of the findings. If we did, we would see that simplistic statistics are used as a political tool and a marketing strategy.

Increasingly participants are being recruited via websites with pop-up surveys. A fun pop-up survey is then transformed into rigorous social research via a sexy media release. Researchers are encouraged to simplify their research findings to grab media attention, and rigorous research is often misrepresented when it is reduced to click bait.

Twenty years ago, the Australian Bureau of Statistics conducted the first national survey of the mental health and wellbeing of adults. Some 10,000 Australians completed a household survey. A media release announced that one in five Australians suffered from a mental illness. This national statistic has become a well-known “fact”, seen on billboards, in media releases and on websites.

A breakdown of the statistic indicated a range of disorders, each with different prevalence rates. In the media release, disorders such as anxiety disorders, substance abuse and depression were all merged into a homogeneous “mental illness”.


Although measuring mental illness via a household survey is an inexact science, the survey generated colourful bar graphs that gave the impression of certainty. These statistics indicated that only 1 per cent of Australians with a mental illness were admitted to hospital. Was this because their illnesses were not serious enough to warrant hospitalisation? Or was there a lack of hospital beds? Without knowing the stories behind the statistics, it was not possible to interpret the data in a meaningful way.

The “one in five Australians” statistic is a reminder that the numbers never speak for themselves. There are often complex stories behind the numbers. This complexity cannot be reduced to a percentile or a bell curve. Our lives are simply too messy.
