Talking Numbers: The Significance of Statistical Significance


[Image: xkcd comic on statistical significance]

Statistical significance. It's so basic a concept that many modelers and statisticians don't check for it in simpler analyses such as marketing campaign comparisons, population distributions, and so forth. Nevertheless, statistical significance can make all the difference (pun fully intended) in whether results are jaw-dropping or trash-worthy. And when speaking with the business side of the house, you'd better know which is which.

Tip for Translation: Less Insignificant Info is More

First of all, check for significance. That could go without saying… but it doesn't (see above). If the results are not significant, then do not put them out there for the business to jump all over… which they will. If they are close to significant and you want to share them, add a caveat/footnote indicating that the results may be due to random variation. Do not use technical terms here.
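For the curious, here is a minimal sketch of that first check in Python, assuming a simple two-campaign email comparison. The conversion counts and send volumes are invented, and a two-proportion z-test is just one reasonable choice of test for this kind of data:

```python
# A minimal sketch of "check for significance first," assuming a
# hypothetical two-campaign conversion comparison. All counts are invented.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 510]    # hypothetical conversions for campaigns A and B
sends = [10_000, 10_000]    # hypothetical emails sent per campaign

z_stat, p_value = proportions_ztest(conversions, sends)

if p_value < 0.05:
    print(f"Significant (p = {p_value:.3f}) -- reasonably safe to share.")
else:
    print(f"Not significant (p = {p_value:.3f}) -- could just be random variation.")
```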

Two common objections that analysts raise about this:

“Yeah, but my tests always appear significant because I have enough data that even tiny differences get picked up due to the sample sizes.”
Try bootstrapping smaller samples for comparison and see what happens. If you are still getting significant results, good on ya. If not, maybe the results weren't so significant to start with.
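A rough sketch of that subsample check, with simulated data standing in for a real "huge n, tiny effect" metric. The t-test, subsample size, and repetition count are placeholders you would swap for your own:

```python
# Repeatedly draw smaller bootstrap samples from each group and see how
# often the test still comes out significant. Data is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated "huge n, tiny effect" groups -- stand-ins for real metrics.
group_a = rng.normal(loc=100.0, scale=15.0, size=500_000)
group_b = rng.normal(loc=100.2, scale=15.0, size=500_000)

n_boot, subsample_size, alpha = 1000, 2000, 0.05
hits = 0
for _ in range(n_boot):
    sub_a = rng.choice(group_a, size=subsample_size, replace=True)
    sub_b = rng.choice(group_b, size=subsample_size, replace=True)
    _, p = stats.ttest_ind(sub_a, sub_b)
    hits += p < alpha

print(f"Significant in {hits / n_boot:.0%} of subsamples")
# If that percentage is low, the original "significant" result is
# mostly a sample-size artifact, not a finding worth sharing.
```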

“But the business impact of this minuscule difference is huge, so it doesn't matter whether the result is significant.”
This is actually all the more reason to validate your results. Presumably, if the resulting teeny difference would cause a major upheaval for the business, so too would a minor variation due to natural fluctuation. Use a significance level that aligns with the importance of finding a difference: for a super-important analysis, go with an alpha of .01 or even .001 rather than the usual .05.
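To make that concrete, the same p-value can clear a casual bar and fail a high-stakes one. A toy illustration, with a made-up p-value:

```python
# Toy illustration: tightening alpha for higher-stakes decisions.
p_value = 0.03  # hypothetical p-value from some analysis

for alpha in (0.05, 0.01, 0.001):
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"alpha = {alpha}: {verdict}")
```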

To Show or Not to Show Statistical Significance…

More often than not, business stakeholders only want to see results that are significant. They want to know how the analysis can be used to better effect. In general, that does not mean that you go around flaunting a p-value. Just state the results and how to use them, and move on. On the other hand, it is usually informative and interesting to business users to see statistically insignificant results when they confirm or debunk a long-held hypothesis.

For example, let's say that there is a “gut feel” that customers who buy diapers also buy beer. After doing some testing on purchase data, you find that there is no significant association between these two product categories. The business stakeholders (and holders of the gut feeling) would likely need to know that the expected link didn't show up in the data. It impacts store and display layouts for the future.
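If you wanted to run that kind of check yourself, a chi-squared test of independence on a basket-level contingency table is one straightforward approach. Here is a hedged sketch; the counts below are entirely invented:

```python
# Chi-squared test of independence: do diaper purchases and beer
# purchases co-occur more than chance would suggest? Counts are invented.
from scipy.stats import chi2_contingency

#                 bought beer   no beer
contingency = [[120,           880],     # bought diapers
               [1130,          7870]]    # no diapers

chi2, p, dof, expected = chi2_contingency(contingency)
print(f"p = {p:.3f}")
if p >= 0.05:
    print("No significant association -- the gut feel doesn't hold up here.")
```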

Got some great examples of significant results that really weren’t? Or times when instincts were proven right/wrong? I’d love to hear all about your adventures with statistical significance in the comments.

Talking Numbers: What is Your Quest?


[Image: the Bridgekeeper from Monty Python and the Holy Grail]

What is your name?

What is your quest?

What is your favorite color?

Arguably the most difficult skill for any analyst to learn is how to communicate the numbers to the business effectively. This act of translation is the most important moment in any analysis, because it is when we, as number crunchers, get buy-in (I know, corporate-speak) from the people who actually do stuff with what we're telling them (a.k.a. decision makers)… or not.

Tip for Translation: Remember the Dependent Variable
