Let’s say you’re a scientist who wants to do some research on the Tooth Fairy. You could design your study to determine if the Tooth Fairy leaves more money for a tooth left in a plastic baggie under the pillow than for a tooth wrapped in a piece of tissue (as we used to do in our family). Or you could look at the average amount of money left behind for the first baby tooth to fall out compared to the last tooth. Or perhaps you might attempt to correlate Tooth Fairy proceeds with the income of the toothless kid’s parents.
None of these would be good research, according to Dr. Harriet Hall, editor of Science-Based Medicine. She explains:
“You can get reliable data that are reproducible, consistent, and statistically significant. You think you have learned something about the Tooth Fairy. But you haven’t. Your data has another explanation, parental behaviour, that you haven’t even considered. You have deceived yourself by trying to do research on something that doesn’t exist.”
Dr. Hall presented her lecture, “Tooth Fairy Science and Other Pitfalls: Applying Rigorous Science to Messy Medicine,” at the annual Skeptic’s Toolbox conference at the University of Oregon in August.
Hers is an eye-opening and provocative inquiry into the differences between evidence-based and science-based research, and into the pitfalls of conventional evidence-based medical research. For example, she outlines a number of things that can go wrong in research:
- If a drug company funds the research, the results are more likely to support its drug than if an independent lab does the study.
- If the researchers are true believers, all kinds of psychological factors come into play – and even if they do their best to be objective, they are at risk of bias.
- People who volunteer for a study of acupuncture are more likely to believe it might work; people who think acupuncture is nonsense probably won’t sign up.
- Three studies are done, but only one shows positive results – so that’s the one submitted for publication (the “file drawer effect”).
- Most senior researchers delegate the day-to-day details of research to subordinates or grad students. Sometimes the peons in the trenches are just doing a job and trying to please their boss. They may feed false data to the author or suppress information they know the boss doesn’t want to hear.
- Sometimes when you read the conclusion of a study and go back to look at the actual data, the numbers don’t justify the conclusion.
- The report can’t possibly contain every detail of the research – what are they not telling us?
As if this list of problems weren’t discouraging enough, consider Dr. Hall’s warnings on these issues, too:
“Some countries only publish studies with positive results. In China, for example, if you published a study showing something didn’t work, you would lose face and might even lose your job. So I won’t trust results out of China until they are confirmed in other countries.
“When they calculate the statistics, they can use the wrong method or make mistakes. They can misinterpret the findings. The file drawer effect is when negative studies are not submitted for publication; publication bias is when the journals are less likely to publish negative studies. Inappropriate data mining is when the study doesn’t show what they wanted, and so they look at subgroups and tweak the data every which way until they get something that looks positive.
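The danger of that kind of inappropriate data mining has a simple statistical basis: the more subgroups you test, the more likely at least one comes up “positive” by chance alone. A minimal sketch of the arithmetic (the 0.05 significance threshold and subgroup counts here are illustrative assumptions, not figures from Dr. Hall’s lecture):

```python
def familywise_error(num_tests, alpha=0.05):
    """Chance of at least one false positive across independent tests.

    Each test has a probability `alpha` of a spurious positive, so the
    chance that all of them correctly come back negative is
    (1 - alpha) ** num_tests; the false-positive risk is the complement.
    """
    return 1 - (1 - alpha) ** num_tests

for n in (1, 5, 20):
    print(f"{n:2d} subgroup tests -> "
          f"{familywise_error(n):.0%} chance of a false positive")
```

Run a single honest test and the false-positive risk is 5%; slice the data into 20 subgroups and the chance of finding at least one spurious “positive” result climbs to roughly 64% – which is exactly why tweaking the data “every which way” so often produces something that looks publishable.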
“And sometimes researchers outright lie and commit fraud to further their careers. Sometimes they get caught, sometimes they don’t.
“If there are only a few subjects, errors are more likely. If you study the net worth of five people and Bill Gates is one of the five, you get skewed results. In general, the more subjects, the more you can trust the results.”
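Dr. Hall’s Bill Gates example is easy to check for yourself: in a small sample, one extreme value dominates the mean, while the median still reflects a typical member of the group. A quick sketch (the net-worth figures below are invented for illustration):

```python
from statistics import mean, median

# Four ordinary net worths (in dollars) plus one billionaire outlier.
net_worths = [50_000, 120_000, 80_000, 200_000, 100_000_000_000]

print(f"mean:   ${mean(net_worths):,.0f}")    # dragged up by the outlier
print(f"median: ${median(net_worths):,.0f}")  # still an ordinary value
```

Here the mean net worth is about $20 billion – a number that describes none of the five people – while the median is $120,000. With more subjects, a single outlier carries far less weight, which is the point of her closing advice: in general, the more subjects, the more you can trust the results.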