Written by Brian Hughes on November 21, 2020
The newly released draft NICE guidelines for the management of “myalgic encephalomyelitis (or encephalopathy)/chronic fatigue syndrome” continue to cause a stir.
And rightly so. The new guidelines not only repudiate a heretofore favoured treatment approach for a particular illness, they also threaten to discredit an entire (albeit quirky) branch of medicine — and, for good measure, to cast clouds over significant swathes of psychology too.
Some of the content of the NICE documents is simply breathtaking. Here are two extracts from expert testimony provided by Jonathan Edwards, professor emeritus of clinical medicine at University College London:
“Reviewing clinical trials in CFS/ME came as something of a shock to me in terms of methodologies considered acceptable…. What surprised me about PACE and other trials in ME/CFS was not so much that therapists were still using unproven treatments but that anyone should think it worth doing expensive formal trials with inadequate methodology.”
and
“Recent comments by three PACE authors in a published response to critique indicate how little the difficulties of expectation bias are understood. The authors say that they prefer the altered outcome criteria that they introduced post-hoc because they gave results more consistent with previous studies and their clinical experience.
They do not seem to realise that outcome measures need to be predefined in order to avoid exactly this sort of interference from expectation bias.”
In other words, they couldn’t see the problem with having marked their own homework.
In essence, this professor of clinical medicine was schooling the psychs on behavioural science. He displayed a better appreciation of psychology than many supposedly esteemed psychologists.
And he was absolutely correct.
Bad methods have been the bane of the psychological sciences for decades. Psychology’s so-called “replication crisis” is simply a manifestation of a deep-seated problem with standards.
Psychologists, collectively, have too long been tolerant of methodological amateurishness. Some of them actually seem to like it.
The key shift in NICE’s approach to ME rests on their new — and improved — attitude to evidence quality.
As Jonathan Edwards argues, in drug trials, poor quality evidence is automatically discarded, because it is recognised as having no value. Bad evidence is seen as equivalent to no evidence at all.
However, when it comes to treatments for ME, the psychologically-oriented professions have seemed entirely happy to rely heavily on bad evidence, on the basis that it is the only evidence they have.
But it seems NICE is no longer in the mood for looking on the bright side of mediocrity. For them it is a matter of No more Mr NICE Guy, as it were.
In psychology, the look-on-the-bright-side approach to deficient evidence is customarily encouraged — on the grounds, among other things, of collegiality and tone.
For a discipline that puts such store in critical thinking, it sometimes feels that psychology, at a corporate level, holds dissent in deep disdain. Many supposedly Very Important Psychologists have punched down harshly whenever critics have had the temerity to call them out on their bad research. Famously, one Ivy League professor bemoaned the “shameless little bullies” who publicly criticised his studies. Another notoriously denounced such academic whistle-blowing on the grounds that it constituted “methodological terrorism.”
This domineering posture has long been employed by the coterie of establishment figures who have promoted the psychological treatment of ME, particularly in the UK. The new stance from NICE suggests that their influence is now waning. Argument from authority no longer holds sway.
On their website, NICE present a full set of supporting documents that informed their new draft guidelines. Tucked away in a file entitled Evidence review G is a no-holds-barred evaluation of the research on “non-pharmacological” (i.e., psychology-based) ME treatments. It’s really quite something.
Table 8, for example, lists details of no fewer than 42 separate outcomes from studies that compare cognitive behavioural therapy (CBT) to usual care, heretofore held up as evidence of its efficacy as a treatment for ME and related diagnoses. A large number of these studies were conducted by investigators on the PACE Trial and their wider network of professional contacts and peers.
The NICE reviewers meticulously assessed every single study for methodological rigour. Considering how doggedly CBT has been defended by its advocates — supposed experts in psychological therapies, remember — the results of the evaluation are, quite simply, humiliating.
Of the 42 outcomes, 37 were graded as yielding “VERY LOW” quality evidence. The remaining five — apparently the cream of this crop — were graded as “LOW” quality. No study was deemed to be of a quality that was even passable, never mind actually “good”.
But that was just one table. There were more. Many more. A total of nineteen tables, in fact, in which NICE proceeded to pick through the details of a very sorry research literature. Overall, across no fewer than 172 CBT outcomes derived from the various studies, NICE graded the evidence for 153 (89%) as “VERY LOW” quality and for the remaining 19 (11%) as “LOW” quality. Not a single study was found to have yielded evidence that exceeded that abysmal threshold.
A similar bloodbath befell studies of graded exercise therapy (GET). Of a total of 64 outcomes in studies of GET, NICE graded 52 (81%) as “VERY LOW” quality and 12 (19%) as “LOW” quality. Again, not a single study produced evidence any better than “LOW” quality.
The most common methodological problem identified in all these studies was “risk of bias.” We all know the reasons for this — dodgy control groups, absent blinding, shameless goalpost-shifting, and the entire unseemly smorgasbord of PACE-style strategies that many of us have been endeavouring to highlight for years.
And yet, despite the fact that we all know about these shortcomings, it is still quite shocking to see them tabulated so extensively and so starkly by NICE.
At last, it seems, someone in authority is actually getting it.
Cartels, by their nature, rarely go down without a fight. In the old days, academics might seek to defend their position with arguments. In the twenty-first century, it’s all about denial and spin. Here are some soundbites from the various CBT and GET advocates exposed by NICE as producers of predominantly “VERY LOW” quality research, as relayed by their go-to public relations firm, the so-called Science Media Centre (emphases added by me):
“Cognitive behaviour therapy (CBT) and graded exercise therapy (GET) are evidence-based treatments for chronic fatigue syndrome (CFS) in that they facilitate reductions in fatigue and improve people’s quality of life if delivered by a qualified therapist.”
“I am aware that there has been controversy over these approaches but there has never been any evidence of harm and they remain the only evidence based treatment approach in CFS.”
“13 years ago there were only two treatments with clinical trial support, namely graded exercise therapy (GET) or cognitive behavioural therapy (CBT), and that has not changed over the years.”
“It is therefore a great surprise that this guideline proscribes or qualifies treatments for CFS/ME for which there is the best evidence of efficacy, namely graded exercise therapy (GET) and cognitive behaviour therapy.”
In every single case, each of these so-called experts describes CBT and GET as “evidence-based” despite the fact that NICE has exposed the purported evidence to be of such low quality as to be meaningless. The “evidence” they refer to is not evidence at all.
Either they haven’t actually read the NICE evaluation, or they just don’t care. But then people like this don’t have to defer to documents. They know everything already. They’re experts.
Such weapons-grade denialism is a key part of the problem.
Black is white. Up is down. We are right, no matter what anyone says.
You know an entire field is in trouble when its key authority figures get so publicly drunk on their own self-reinforcing privilege.
As a psychologist, I always get uncomfortable when psychology talks about its “replication crisis”. And I say this as someone who wrote an entire book called Psychology in Crisis.
Replication isn’t really the difficulty. Rather, it’s the people who are blind to the replication issue who create the actual mess. The problem isn’t bad methods. It’s the culture of denialism that surrounds those methods and which freely perpetuates their use.
NICE’s verdict on psychosocial treatments for ME amounts to nothing less than an utter repudiation. That it comes from an authoritative agency and is based on a thorough empirical review is extremely significant.
This is not just a turning point for people with ME, CFS, and related conditions — it is a high-profile exposure of exactly how, for years, entire subfields of the psychological sciences have been willing to overlook, if not embrace, shoddy standards.
In all respects, this public shaming is long overdue. It is richly deserved.