There is nothing more convincing than someone citing research, and yet we often don’t know whether what’s being cited is any good. Research can be bad if it’s poorly conducted or if the wrong evaluative method was used to answer the question. The methods we use in health care are often quite limited, especially when it comes to community interventions, which is why I have been working with the Institute of Medicine (IOM) to open up what we mean by evaluation. We’re holding a potentially groundbreaking meeting on August 27th, ‘Designing Evaluations For What Communities Value’.
The alleged gold standard in evaluation is the randomised, controlled trial, so much so that one often hears people try to convince on the basis of “RCT evidence”. This lazy citing of a method is one sure way to detect that someone doesn’t know what he or she is talking about (venture capitalists, take note) because a randomised trial is not always appropriate for the question being answered.
My personal gripe with the RCT is the ‘controlled’ bit. By controlling for things that might influence the impact of an intervention, researchers create such sanitised environments that what is learnt cannot be applied to the real world, where everything is connected to everything else, often in unknown ways.
There have been some notable attempts to address this, such as ‘pragmatic randomised, controlled trials’, ‘cluster randomised controlled trials’, and J-PAL’s ‘randomised evaluation’, but the truth is we don’t yet know the best approach to assessing the value of community interventions.
There are two parts to this problem. Firstly, for reasons that are unclear, health care tends to limit itself to a small suite of evaluative techniques often organised as the so-called hierarchy of evidence. Secondly, although we know that all communities are different, and their problems unique to them, we continue to strive for a single ‘best approach’, seemingly to be able to compare interventions from one locality to the next. Although this aim is laudable, it repeats the fallacy of a ‘gold standard’.
There is no gold standard to assess a community intervention. Instead, there is a suite of evaluative techniques that can be used, depending on the community and the intervention being tried. The IOM meeting on August 27th will aim to illustrate this.
Given our desire to be anchored in the real world, we’ve been lucky enough to base the meeting on the needs of the communities entering the ‘Way to Wellville’ competition being run by Esther Dyson’s HICCup. These communities are being challenged to become healthier in whatever way works for them. Although HICCup may ultimately want a single evaluation framework for their ‘competition’ (something I completely disagree with), we’ve built a programme around the needs of a single Wellville community, with representatives from other Wellville communities present as observers.
We’re in the final stages of planning the event but the idea is that people of the community, aided by expert facilitators, will talk about what it’s like to live there and what they value when it comes to their health, while proponents of different techniques will jot down how they’d go about evaluating change in the community – whether it happened and whether it was positive or negative, based on what the communities value.
The key to evaluation in the real world, however, is flexing to what’s seen. HICCup’s competition will last five years. In year one, a community may see something negative and hence want to change their intervention. Doing so would likely require them to also change their approach to evaluation. If this continues year on year, by the end of year five both the intervention and its evaluation may differ significantly, in community-specific ways, from what they started with. This is why it makes little sense to try to come up with a single evaluative framework for community interventions.
Health care’s limited approach to evaluation not only blinds it to the value that interventions create in communities but also keeps it ignorant of what communities actually care about. Opening our minds to other forms of evaluation helps us to better understand and respect the communities we serve. What’s so groundbreaking about the IOM meeting is not the format, as such, but the simple notion that we should let communities define the interventions they want – some of which will have little direct link to the bio-medical definition of health.
We hope you’ll join us for this meeting, but do hurry: spaces are limited.
Competing interests: The Institute of Medicine (IOM) is paying my travel and accommodation to speak and participate at this meeting, something that they’re doing for everyone listed on the agenda. The IOM’s ‘Collaborative on Global Chronic Disease’ is also collaborating with a group I am forming currently called the ‘Creating Health Incubation Group’. The ‘Collaborative’ and the ‘Incubation Group’ cover similar topics but there is no formal partnership, as such.
This post was first co-published on MedCity News and BMJ Blogs.