It’s not an exaggeration to say that functional MRI has revolutionized the field of neuroscience. Neuroscientists use MRI machines to pick up changes in blood flow that occur when different areas of the brain become more or less active. This allows them to noninvasively figure out which areas of the brain are engaged during different tasks, from playing economic games to reading words.
But the approach and its users have had their share of critics, including some who worry about over-hyped claims about our ability to read minds. Others point out that improper analysis of fMRI data can produce misleading results, such as finding areas of brain activity in a dead salmon. While that was the result of poor statistical techniques, a new study in PNAS suggests that the problem runs significantly deeper, with some of the basic algorithms involved in fMRI analysis producing false positive “signals” with an alarming frequency.
Frankly, I’m not surprised. Scientists like this stuff because it looks sexy. Journalists love it because it gives them pretty pictures to include in articles that over-glorify studies that, at best, showed a possible correlation between thoughts and brain areas.
So 40,000 papers have been published without the scientists ever really ensuring that their method of measurement was valid…
Here’s the abstract of the study that found these errors:
Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.
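The gap between the expected 5% and the observed rates of up to 70% is, at bottom, a multiple-comparisons problem: an fMRI analysis runs a statistical test at many locations at once, and without proper correction the chance of at least one false positive somewhere balloons far past the per-test threshold. The paper's actual approach used real resting-state data and cluster-level inference; the following is only a minimal, hypothetical sketch of the underlying inflation effect, testing pure noise at many independent "voxels" with an uncorrected 5% threshold:

```python
import random
import statistics

def t_stat(xs):
    # one-sample t statistic against a true mean of 0
    n = len(xs)
    return statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)

random.seed(42)
N_SUBJECTS = 20    # samples per "voxel" test (hypothetical group size)
N_VOXELS = 100     # independent tests per analysis (real scans have far more)
N_ANALYSES = 500   # simulated null experiments
T_CRIT = 2.093     # two-sided 5% cutoff for df = 19

voxel_hits = 0       # individual tests that come up "significant"
familywise_hits = 0  # analyses with at least one false positive
for _ in range(N_ANALYSES):
    any_hit = False
    for _ in range(N_VOXELS):
        data = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]  # pure noise
        if abs(t_stat(data)) > T_CRIT:
            voxel_hits += 1
            any_hit = True
    if any_hit:
        familywise_hits += 1

per_voxel_rate = voxel_hits / (N_ANALYSES * N_VOXELS)
familywise_rate = familywise_hits / N_ANALYSES
print(f"per-voxel false-positive rate:   {per_voxel_rate:.3f}")
print(f"family-wise false-positive rate: {familywise_rate:.3f}")
```

Each individual test behaves as advertised, at roughly 5%, but nearly every uncorrected analysis finds "activity" somewhere in the noise. The paper's finding is subtler: even the *correction* methods the major software packages apply can fail, because their assumptions don't hold for real fMRI noise.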
That’s pretty damning stuff. Much of an entire scientific field may rest on invalid statistical analyses.
Source: Software faults raise questions about the validity of brain studies – Ars Technica