Statistical Significance vs. Clinical Significance
A drug that lowers your blood pressure by 0.5 mmHg can produce a statistically significant result. It can also be completely useless.
Statistical significance means a result this large would be unlikely to occur by chance alone if there were no real effect. It says nothing about whether the result matters in the real world. A study with a hundred thousand participants can detect absurdly small effects: differences so tiny that no patient would ever notice them, no doctor would change a treatment because of them, and no health outcome would improve as a result. But the p-value comes in under 0.05, so it gets published as a "significant finding."
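To see how sample size alone can push a trivial effect under the 0.05 threshold, here is a minimal simulation sketch in Python. The group means, spread, and sample size are invented for illustration, loosely echoing the 0.5 mmHg blood pressure example above; this is not data from any real trial.

```python
# Illustrative simulation (made-up numbers): a 0.5 mmHg average reduction
# in blood pressure comes out "statistically significant" once the sample
# is large enough, even though no patient would notice the difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 100_000                               # participants per group
sd = 15.0                                 # typical spread of systolic readings, mmHg
control = rng.normal(130.0, sd, n)        # untreated group
treated = rng.normal(129.5, sd, n)        # treated group, 0.5 mmHg lower on average

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():.2f} mmHg")
print(f"p-value: {p_value:.2g}")          # far below 0.05 with a sample this large
```

With a hundred thousand people per group, the p-value comes out vanishingly small even though the average difference is half a point on a scale where individual readings routinely swing by ten or more.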
Clinical significance is different. It asks: does this effect actually make a difference in someone's life? A cancer drug that extends survival by two days can be statistically significant with a large enough sample. It's not clinically meaningful. Conversely, a treatment that dramatically helps a small subset of patients might fail to reach statistical significance because the sample was too small, and get dismissed as ineffective.
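The converse can be sketched the same way. Below, a hypothetical treatment with a genuinely large average benefit is tested in repeated small trials, and most runs fail to reach p < 0.05. Again, the effect size, variability, and trial size are invented purely to illustrate low statistical power, not to model any real drug.

```python
# Illustrative power simulation (made-up numbers): a real benefit of about
# half a standard deviation, tested with only 15 patients per arm, reaches
# p < 0.05 in a minority of trials. The rest would be reported as
# "no significant difference" despite the genuine effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_arm = 15
true_benefit = 8.0        # real average improvement, in the outcome's units
sd = 15.0                 # patient-to-patient variability

runs = 10_000
significant = 0
for _ in range(runs):
    control = rng.normal(0.0, sd, n_per_arm)
    treated = rng.normal(true_benefit, sd, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    significant += p < 0.05

print(f"trials detecting the real effect: {significant / runs:.0%}")
# With these numbers, only roughly a quarter to a third of trials reach
# p < 0.05; the rest would dismiss a treatment that actually works.
```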
The gap between these two concepts is where a lot of misleading health claims live. Drug companies, supplement marketers, and headline writers all benefit from the confusion. "Statistically significant" sounds like "proven to work." It doesn't mean that. Whenever you hear that a study found a "significant" result, ask: significant by how much? Enough to matter to a real person, or just enough to pass a mathematical test?