
The Role of Anecdote vs. Scientific Evidence in the Running Form and Footwear Debate

Posted on November 08 2011

In my previous post I shared my thoughts on the current debate about running form and footwear – that post was triggered by reading the comments related to Christopher McDougall’s recent article in the New York Times Magazine. A number of commenters criticized McDougall for basing his article on a few individual anecdotes (his own personal history included). Several people asked for the scientific “evidence” that barefoot running form is better, ignoring the fact that this was a popular magazine article and not a peer-reviewed journal article (and that several notable scientists were quoted – e.g., Benno Nigg).

I’m going to avoid specific references to the NY Times article here, as others have covered it in depth (e.g., this post by Alex Hutchinson). Rather, the constant back and forth about science vs. anecdote deserves some commentary, so I’d like to share a few of my thoughts on this more general issue.

1. Anecdotes have value. No, anecdotes are not the same as controlled scientific studies, and the plural of anecdote is not fact. But anecdotes are examples where something has (or has not) worked for an individual. Someone tries something out, say a particular shoe, and they observe a positive benefit. Does this mean that every person is going to obtain the same benefit from trying that shoe? No, but for people suffering from a similar chronic injury, anecdotal reports of positive experiences can be very helpful. How often have you provided advice to a fellow runner in need? If a friend asks you how you overcame your own bout with ITBS or plantar fasciitis, would you simply say “I’m not telling you because there’s no scientific study proving that my approach is best”? Doubtful. We runners constantly share what works and what does not work amongst our community, whether it be about injuries, training methods, or which race is best to run for a BQ. It’s what makes the running community so vibrant, and such an amazing group to be a part of.

Though they may not be considered to be as valuable as randomized, controlled experiments, anecdotes do play an important role even in science and medicine. The case study is a classic learning tool in medical schools, clinicians often rely on past, personal experience with “what works” in devising treatment plans for a given patient, and case reports are commonly published in the medical literature (here’s a recent one on the effect of foot strike modification on knee pain outcomes in 3 patients). Personal and clinical experience are based on a foundation of anecdotes, and to discount this reality is a mistake.

All of this being said, it is also critical not to take lessons learned from anecdotes and apply them too broadly. Just because something worked well for one person does not mean it will work well for all. Should everyone go out and run on their forefeet under all circumstances? No, absolutely not. For some people this is probably a recipe for disaster (e.g., women who wear high heels all day), and even back in the 1950s marathoners did not typically land on their forefeet – very few marathon runners are true forefoot strikers. Figuring out how broadly to apply anecdotal experience is where scientific study comes into play. We observe that some intervention works through anecdotal reports, then do careful, controlled testing to see just how broadly this response occurs when replicating the same intervention in different individuals. Maybe the anecdotal benefit was a fluke, maybe not. But both anecdote and controlled, rigorous study have a role in science and medicine.

2. Science has limits too. Controlled studies are of great value, but their application has limits. Pertinent to my thoughts here are two quotes that Steve Magness recently posted to Twitter:

“If you are advising or treating individuals according to the average effects, you may be doing the wrong thing” – via Evolution in 4 Dimensions

“Averages mask individual variation – that’s the whole point of an average.”

The problem with scientific studies, as I’ve discussed previously on this blog, is that all too often they compare group means. Let’s say we took 90 new runners and randomly assigned them to either the Nike Pegasus, Nike Free 3.0, or Vibram FiveFingers (30 in each group). We devised a careful progression in mileage, and tracked their injury status for 6 months. Suppose 50% of Vibram runners reported an injury in those 6 months, whereas 40% of Pegasus runners reported an injury, and only 20% of Nike Free runners reported an injury. Suppose also that the differences in injury risk among the three shoes were found to be statistically significant. Based on a result like this, should we conclude that the Nike Free is the best shoe and the Vibrams should be pulled from the market? Perhaps, and I’m sure Nike marketing would be all over this, but the results only indicate that the Free performed better on average than the other two shoes. What if the 50% who were not injured in Vibrams were able to run for the first time in their lives without significant pain? Should we advise them to stop immediately because the shoe performed worse than the others? Should we tell the 60% of runners in the Pegasus who had no problems that they should switch to Frees, or should we just advise that they stick with the shoe that’s currently working? What if the 20% who got hurt in Frees had really nasty injuries, and we could determine that they were caused by the shoe being wrong for their feet? I think you get the point. Studies like this can provide helpful guidance, but they generally don’t have much to say about what is best for the individual. When it comes to running, individuals don’t care about average group responses; they care about what is going to keep them running injury-free out on the road or trail.

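For the curious, here’s a minimal sketch of how the group-level comparison in that hypothetical study might be analyzed – a simple chi-square test on the made-up injury counts (Python with scipy; the numbers are just the percentages above translated into counts of 30 runners per group). This is purely illustrative, not data from any real study:

```python
# Minimal, illustrative sketch: chi-square test of independence on the
# hypothetical injury counts described above (30 runners per shoe,
# percentages converted to counts). These are made-up numbers.
from scipy.stats import chi2_contingency

# rows: shoe model; columns: [injured, uninjured] after 6 months
observed = [
    [15, 15],  # Vibram FiveFingers: 50% injured
    [12, 18],  # Nike Pegasus: 40% injured
    [6, 24],   # Nike Free 3.0: 20% injured
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# With these made-up counts, p comes out just under 0.05 (a "significant"
# group-level difference), yet the test says nothing about which shoe is
# right for any individual runner.
```
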
Again, this does not mean that a study like this has no value. In fact, a study just like this is currently being conducted by Michael Ryan at the University of British Columbia – see the video below:

The value here is that this type of study lets us home in a bit on why some people do better in one shoe than another. It provides us better guidance on where to steer new runners who are looking for a shoe – i.e., it helps us better define a starting point. That’s why I found Ryan’s study on pronation control systems in shoes to be so valuable. It’s not that stability shoes or neutral shoes never work – for many people they work just fine. But for others they don’t, and Ryan’s study showed that the way we assign a shoe to a runner appears to be deeply flawed. Given this, I’d generally take the anecdotal experience of a running partner over most other sources when it comes to choosing a new shoe. My friends know what I like and dislike in a shoe, and they know my needs/preferences better than any scientific paper or store clerk. That’s not to say that a knowledgeable store clerk is not someone to listen to – it’s just that it can be hard to know at first who you’re dealing with. Is it someone working a summer job who is following an outdated company fitting policy, or is it an open-minded individual with long and deep experience with lots of shoes? The latter is what you want when you walk into a running store.

At the end of all this I hope I’ve made the point that anecdotes should not simply be dismissed, and in some cases they might be even more valuable than the results of a scientific study. As a scientist myself it might be a bit heretical to say something like that, but it’s what I believe. McDougall’s conclusions may not have been based upon a foundation of studies that clearly demonstrate that barefoot-style running is indeed better, but neither have the shoe companies designed shoes based upon rigorous and open scientific efficacy trials. Don’t believe me? Here’s a quote from Ryan’s recent paper in the British Journal of Sports Medicine: “…despite over 20 years of stability elements being incorporated in running footwear there is, as yet, no established clinically based evidence for their provision.” And what of anecdote? Running magazines and books are full of advice based on them; it’s not just McDougall’s article. In fact, I admit to being guilty of it myself – all of the shoe reviews I’ve ever written amount to nothing more than personal anecdotes.

Maybe if McDougall hadn’t referred to the 100ups as foolproof, or if he had attached a disclaimer that “individual results will vary,” this would have appeased some of the more critical readers. I do agree that there is risk associated with advocating too hard for a particular running style or footwear type – some people get enthused about a new idea, push too hard with it, and get hurt. Determining just how strongly we should push one type of footwear over another is where I do think more rigorous scientific studies like Ryan’s will help to shake things out. But I’m certain that his results will not end this debate.
