SCIENTIFICALLY TESTING SCIENCE
-Winston Zeddemore, Ghostbusters
In the movie Ghostbusters, there's a fun scene in which Bill Murray is running a scientific study. However, because his character (and likely Murray himself) is more interested in getting laid than doing science, he sabotages the entire study in order to flirt with and attract the cute college coed.
The dean, upon announcing that the guys are being thrown out of Columbia University, says:
Your theories are the worst kind of popular tripe, your methods are sloppy and your conclusions are highly questionable. You're a poor scientist, Dr. Venkman, and you have no place in this department or in this University.
And he's right. And that happens in science: not all people involved in the work are trustworthy and brilliant.
There's an article up on Real Clear Science with some very good tips that every reporter, and anyone who reads about a scientific discovery or theory, needs to read. In it, Alex Berezow gives 20 tools for analyzing a scientific study to see how valid and reliable it is.
Because not every scientific study is actually trustworthy or factual. Some are done in order to get a specific result, and others are done with poor methods and cannot be trusted. And the tips suggested are very good to keep in mind when dealing with a report on scientific studies. Here are the first five:
1. Variation happens. Everything is always changing. Sometimes the reason is really interesting, and other times it's nothing more than chance. Often, there are multiple causes for any particular effect. Thus, determining the underlying reason for variation is often quite difficult.
2. Measurements aren't perfect. Two people using the exact same ruler will likely give slightly different measures for the length of a table.
3. Research is often biased. Bias can either be intentional or unintentional. Usually, it's the latter. If an experiment is designed poorly, the results can be skewed in one direction. For example, if a voter poll accidentally samples more Republicans than Democrats, then the result will not accurately reflect national opinion. Another example: Clinical trials that are not conducted using a "double blind" format can be subject to bias.
4. When it comes to sample size, bigger is better. Less is more? Please. More is more.
5. Correlation does not mean causation. The authors say that correlation does not imply causation. Yes, it does. It is more accurate to say, "Correlation does not necessarily imply causation" because the relationship might actually be a causal one. Still, always be on the lookout for alternate explanations, which often take the form of a "third variable" or "confounder." A famous example is the correlation between coffee and pancreatic cancer. In reality, some coffee drinkers also smoke, and smoking is a cause of pancreatic cancer, not drinking coffee.
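The coffee-and-cancer example can be illustrated with a quick simulation. This is a toy model with made-up probabilities, not real data: smoking raises the disease risk, smokers happen to drink more coffee, and coffee itself does nothing, yet coffee drinkers still come out with a higher disease rate. That's the "third variable" at work.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy confounder model: smoking causes cancer; smoking also
    correlates with coffee drinking; coffee has no causal effect."""
    coffee_cancer = coffee_total = 0
    nocoffee_cancer = nocoffee_total = 0
    for _ in range(n):
        smokes = random.random() < 0.3
        # Smokers drink coffee more often, so coffee correlates with smoking.
        coffee = random.random() < (0.8 if smokes else 0.4)
        # Only smoking changes the disease risk in this model.
        cancer = random.random() < (0.05 if smokes else 0.01)
        if coffee:
            coffee_total += 1
            coffee_cancer += cancer
        else:
            nocoffee_total += 1
            nocoffee_cancer += cancer
    return coffee_cancer / coffee_total, nocoffee_cancer / nocoffee_total

coffee_rate, nocoffee_rate = simulate()
# Coffee drinkers show a higher cancer rate even though coffee does nothing.
print(coffee_rate, nocoffee_rate)
```

A naive reading of the output would "prove" coffee causes cancer; the code shows the correlation appears purely because smoking drives both variables.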
Other tips include "beware of cherry-picked data," "control groups are essential," and "beware of extreme data." Of particular importance is understanding the meaning of terms, for example the difference between "significant" and "important." In statistics, "significant" refers to an effect large enough that it is unlikely to be the result of mere chance, and large enough to be measurable.
The threshold most statisticians use is 0.05: a result counts as significant if there is less than a 5% chance of seeing data that extreme when nothing real is going on. That's their limit for trusting that they're looking at an actual effect and not just something turned up by chance. If the data clears that bar, it is "significant," as in, not "insignificant," or too small to be reliably trusted as information.
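The 0.05 cutoff can be seen in a simple coin-flip example. This sketch computes an exact two-sided binomial p-value by hand (no statistics library assumed): 60 heads in 100 flips lands just above the cutoff, while 70 heads lands far below it.

```python
from math import comb

def binomial_p_value(heads, flips, p=0.5):
    """Two-sided exact p-value: probability of a result at least as
    far from the expected count as `heads`, assuming a fair coin."""
    observed = abs(heads - flips * p)
    total = 0.0
    for k in range(flips + 1):
        if abs(k - flips * p) >= observed:
            total += comb(flips, k) * p**k * (1 - p)**(flips - k)
    return total

# 60 heads in 100 flips: suggestive, but just above the 0.05 cutoff,
# so a statistician would not call it significant.
print(binomial_p_value(60, 100))
# 70 heads in 100 flips: far below 0.05 -- "significant."
print(binomial_p_value(70, 100))
```

Note what the p-value does and doesn't say: it measures how surprising the data would be if the coin were fair, not how important the result is.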
So if someone says there's a "significant" loss of ice at the North Pole, that just means there's been enough loss to measure reliably. It doesn't mean significant the way most people use it (sufficiently great or important to be worthy of attention; noteworthy). Since reporters will pass on this sort of thing without defining the term, the confusion is natural.
But something can be statistically significant yet utterly unimportant. As the article says, "If chemical X doubles your risk of disease from 1 in a million to 2 in a million, that's not an effect worth worrying about."
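The chemical X arithmetic is worth spelling out, because "doubles your risk" and "one extra case per million people" describe the same numbers:

```python
# Relative vs. absolute risk for the chemical X example.
baseline = 1 / 1_000_000   # 1 in a million without exposure
exposed = 2 / 1_000_000    # 2 in a million with exposure

relative_risk = exposed / baseline       # 2.0 -- "doubles your risk!"
absolute_increase = exposed - baseline   # one extra case per million people

print(relative_risk, absolute_increase)
```

Headlines tend to report the relative number because it sounds dramatic; the absolute number is what tells you whether to care.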
There's only one real concern I have with the writer, and it's this bit:
Many people wrongly believe that there was no global warming in the 15-year period spanning 1995-2009. But, the planet indeed kept warming up; the data just wasn't statistically significant.

Except that doesn't mean the planet kept warming up. It may have, but the change was too small to measure or trust. And the range of variation means it could very well have actually been cooling. In other words, the writer is making the exact same mistake he warns about by misusing the word "significant": he's asserting something the data does not show.
But overall, it's a good article with good tools for understanding science better.