Recently I wrote an article suggesting that decision makers need a greater knowledge of mathematics to operate effectively in today’s more data-driven environments. In this article I want to focus on one aspect of mathematics that I believe is sorely lacking in the business world, and whose absence can lead to seriously erroneous decisions being made on a regular basis: the concept of *significance* when studying data.

It’s quite possible that readers of this article already deeply understand the concept of statistical significance. But based on experience, I suspect a large proportion of readers will not be able to make that claim. So I want to introduce the idea of significance in a way that I hope is both intuitive and useful. I also hope it will lead people to ask about significance a lot more than they currently do, and to pause before jumping to conclusions without proper significance testing of analytic results.

### Tossing a coin

The best way to start developing an understanding of significance is to look at tossing a coin. This is easy to understand because a coin toss has only two outcomes: heads or tails. You can imagine that a fair or normal coin has an equal probability of landing heads or tails. But a biased or loaded coin may not. For example, it could be one of those trick coins with two heads.

So let’s assume we have a coin which we are told has heads on at least one side, and we want to determine if the coin is in fact double-headed. Someone will toss this coin for us as many times as we wish, and will always be truthful about the outcome.

The coin is tossed once and the result is heads. If the coin is fair, the probability of that happening by chance is one in two. How comfortable are you with these two statements?

- The coin was tossed once and the outcome was heads.
- The coin is biased.

Hopefully you are 100% comfortable with statement 1, and not very comfortable with statement 2. Good. Now let’s toss the same coin a second time. It’s heads again. The probability of two heads in a row occurring by chance is 1 in 4. Maybe you are now a tiny bit more inclined to agree with statement 2. Maybe not.

Let’s say the coin is tossed ten times, and all ten times the result is heads. The probability of this happening by chance is less than 1 in 1000. Most people would now feel much more confident making statement 2 above.
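These probabilities are easy to check for yourself. Here is a minimal Python sketch of the arithmetic: each toss of a fair coin is an independent event with probability 0.5, so the chance of k heads in a row is simply 0.5 raised to the power k.

```python
# Probability of k heads in a row with a fair coin: the tosses are
# independent, so the individual probabilities multiply.
def p_all_heads(k):
    return 0.5 ** k

print(p_all_heads(1))   # 0.5  (1 in 2)
print(p_all_heads(2))   # 0.25 (1 in 4)
print(p_all_heads(10))  # 0.0009765625, i.e. less than 1 in 1000
```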

### What is significance?

Significance as a concept deals fundamentally with three truths:

- A: The world around us is uncertain
- B: You can never test things enough to be 100% certain about a conclusion
- C: Decisions still need to be made in life despite A and B

In the case of a coin toss, no matter how many times you toss the coin and continually get heads, there will always be some probability that the coin is fair, however minuscule (A). But you can’t toss the coin forever, or you will die before you make a decision (B), so you need to determine a point at which you can state with confidence that the coin is biased (C).

Everyone has a different sense of certainty. I might need to see the coin tossed ten times before I am willing to stick my neck out and claim the coin is biased, but my friend might be happy with three coin tosses. To deal with these natural human differences, statisticians established a standard for certainty. There are a number of elements to this standard, but the most quoted is the idea that to claim significance, the probability of the phenomenon happening by chance needs to be less than 0.05, or 1 in 20. So in our coin toss example, to make a claim of statistical significance, you would need a minimum of 5 heads in a row to reach the prescribed level of certainty.
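If you want to verify where the 0.05 threshold lands for the coin example, a few lines of Python will do it; this confirms that 5 heads in a row is the first point where the chance probability drops below 1 in 20.

```python
# Find the smallest number of consecutive heads whose probability
# of occurring by chance with a fair coin falls below 0.05.
n = 1
while 0.5 ** n >= 0.05:
    n += 1
print(n, 0.5 ** n)  # 5 0.03125
```

Note that 4 heads in a row gives 0.0625, which is still above the threshold, so 4 tosses would not be enough.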

### What does this mean for my work?

So let’s say a diligent analyst comes to you and shows you an analysis they did. In this analysis, the analyst has looked at all the staff in your organization, split them between men and women, and showed the following:

- 20 out of 100 women left the company in the past 12 months — a 20% departure rate
- 30 out of 200 men left the company in the same time period — a 15% departure rate

Now you are going to put a heading on a chart that shows this analysis. You have a choice of two headings:

- In the past 12 months in our organization, the departure rate for women has been 5 percentage points higher than the departure rate for men
- Women are more likely to leave our company than men

I hope that you are immediately thinking of the coin toss example. Heading 1 is a fine heading and you can write it down immediately. But you can’t write Heading 2 down without seeing the result of a significance test. If you were to do such a test, you would discover that the probability of these proportions arising by chance (known as a p-value) is approximately 0.27, nowhere near low enough to confidently make Heading 2’s claim.
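For the curious, that 0.27 figure can be reproduced with a standard two-proportion z-test using only the Python standard library. This is a sketch of one common way to run such a test; an equivalent chi-squared test would give essentially the same answer for this data.

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided, pooled two-proportion z-test.

    x1 of n1 and x2 of n2 are the counts of leavers in each group.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # overall departure rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se                          # test statistic
    return math.erfc(z / math.sqrt(2))             # two-sided p-value

# 20 of 100 women vs 30 of 200 men
print(round(two_proportion_p_value(20, 100, 30, 200), 2))  # 0.27
```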

This is important because Heading 2 is a very serious statement. It could result in substantial effort and investment being made to address a perceived problem. So it is incumbent on you and the analyst to go back and get more data so that you can support the statement to the correct statistical standard.

### When should I ask about significance?

You should ask about significance whenever a limited, finite dataset is being used to back up a general statement. This can play out in numerous ways. As in the example above, one or more groups may be compared on a specific measure to claim that certain groups differ from others. Or a specific relationship may be claimed using data about correlation. Different statistical tests exist for different situations, and you should have someone available who knows them and can apply them.

Significance generally depends on both the size of the difference observed and the size of the population it was observed in. In our women versus men example above, claiming significance would require either a larger difference in the same size populations, or the same difference in larger populations. This should guide your instinct when you look at data.
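To make that instinct concrete, here is a hypothetical sketch that runs the same pooled two-proportion z-test on the same 20% versus 15% departure rates, but with the sample sizes scaled up.

```python
import math

# Pooled, two-sided two-proportion z-test (standard library only).
def p_value(x1, n1, x2, n2):
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(x1 / n1 - x2 / n2) / se
    return math.erfc(z / math.sqrt(2))

# Same 20% vs 15% departure rates, at 1x, 2x and 4x the sample sizes.
for scale in (1, 2, 4):
    p = p_value(20 * scale, 100 * scale, 30 * scale, 200 * scale)
    print(f"{scale}x sample: p = {p:.3f}")
```

With the original sample sizes the p-value is around 0.27; at double the sample it is still above 0.1; only at roughly four times the original sample does the same five-point gap become significant at the 0.05 level.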

But I know from extensive experience that instincts aren’t always right, so my advice is: *always test, and never draw conclusions without testing*. Here are my biggest tips for making significance part of your analytics culture:

- Get in the habit of having footnotes that include p-values
- Never write general statements about analytic conclusions without a p-value that supports a claim of significance
- Don’t be afraid to be pedantic with colleagues about this. Lead by example!

Significance is not a complex concept in most situations in the business world, but it is vastly under-emphasized. Making decisions without significance testing is poor practice in an increasingly data-driven world. It’s time that changed.