Anesthesia & Analgesia, June 2013
A small excess of positive results after thousands of trials is most consistent with an inactive intervention. Such a small excess is precisely what poor study design and publication bias predict. Further, Simmons et al. (2011) demonstrated that exploitation of “undisclosed flexibility in data collection and analysis” can produce statistically significant positive results even for a completely nonexistent effect. With acupuncture in particular, profound bias among proponents is well documented (Vickers et al., 1998). Existing studies are also contaminated by variables other than acupuncture itself – such as the frequent inclusion of “electro-acupuncture”, which is essentially transcutaneous electrical nerve stimulation masquerading as acupuncture.
The best controlled studies show a clear pattern – with acupuncture the outcome does not depend on needle location or even needle insertion. Since these variables are what define “acupuncture” the only sensible conclusion is that acupuncture does not work. Everything else is the expected noise of clinical trials, and this noise seems particularly high with acupuncture research. The most parsimonious conclusion is that with acupuncture there is no signal, only noise.
No doubt acupuncture will continue to exist on the High Streets, where it can be tolerated as a voluntary self-imposed tax on the gullible, who believe claims that are simply untrue.
It is clear from meta-analyses that the results of acupuncture trials are variable and inconsistent, even for single conditions. After thousands of trials of acupuncture and hundreds of systematic reviews (Ernst et al., 2011), arguments continue unabated. A 2011 editorial in Pain summed up the present situation well:
“Is there really any need for more studies? Ernst et al. (2011) point out that the positive studies conclude that acupuncture relieves pain in some conditions but not in other very similar conditions. What would you think if a new pain pill was shown to relieve musculoskeletal pain in the arms but not in the legs? The most parsimonious explanation is that the positive studies are false positives. In his seminal article on why most published research findings are false, Ioannidis (2005) points out that when a popular but ineffective treatment is studied, false positive results are common for multiple reasons, including bias and low prior probability.”
Since it has proved impossible to find consistent evidence after more than 3000 trials, it is time to give up. It seems very unlikely that the money it would cost to do another 3000 trials would be well spent; it would be better spent elsewhere.