Anecdotal Evidence vs. Clinical Evidence
Your grandmother's remedy worked for her. A clinical trial worked for a thousand strangers. Neither one tells the whole story.
Anecdotal evidence is personal experience: "I took this supplement and felt better." Clinical evidence comes from controlled studies with larger groups, designed to separate real effects from placebo, coincidence, and natural recovery. The standard view is that clinical evidence is the kind worth taking seriously and anecdotes are noise. That's half right. But the other half matters too.
Clinical trials have real limitations. They study averages, not individuals. They often exclude the elderly, people with multiple conditions, and anyone who doesn't fit the protocol. Their results can be shaped by funding bias, p-hacking, and publication bias. A trial saying a drug works "on average" doesn't mean it works for you — and it doesn't mean the person whose anecdote you dismissed was wrong about their own experience.
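To make the "works on average" point concrete, here's a minimal sketch with made-up numbers. Everything in it is an assumption for illustration (a hypothetical treatment, a 30% responder rate, arbitrary effect sizes), not data from any real trial. The idea: if a treatment helps only a minority, the trial's average can still come out positive.

```python
import random

# Illustration only: hypothetical treatment, made-up numbers.
# Assume 30% of people respond strongly and the rest barely change.
random.seed(0)

n = 1000
responder_rate = 0.3
effects = []
for i in range(n):
    if i < int(n * responder_rate):
        effects.append(random.gauss(10, 2))  # responders: roughly a 10-point improvement
    else:
        effects.append(random.gauss(0, 2))   # everyone else: essentially no change

average = sum(effects) / n
clear_benefit = sum(1 for e in effects if e > 5)

print(f"average effect: {average:.1f} points")           # positive: "the drug works"
print(f"clear benefit:  {clear_benefit} of {n} people")  # but most saw little change
```

The average lands around three points, which sounds like a win, while roughly seven people in ten improved by almost nothing. That gap between the summary and the individuals is what the trial's headline number hides, and it's the same gap a single anecdote can't bridge from the other direction.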
The honest position is that both forms of evidence have blind spots. Anecdotes can mislead because one person's experience doesn't generalise. Clinical evidence can mislead because averages hide individual variation and the research process itself is prone to distortion. Dismissing either one entirely makes you easier to fool, not harder. The useful question isn't "which type of evidence wins?" It's "what does each type actually tell me, and what does it leave out?"