Knowing when and how to apply established knowledge in practice is difficult. A recent article in The Lancet shows why. The thickness of the inner walls of the carotid artery is associated with cardiovascular disease; so much so that the American College of Cardiology (ACC) and the American Heart Association (AHA) support the use of a single measurement to risk-stratify patients at intermediate risk. With such heavyweight organisations backing the measurement, you’d be forgiven for thinking this established knowledge that should be applied to all processes within cardiology clinics. It’s easy to start imagining the uses: measure everyone’s carotid artery thickness, plot them on a graph, and start proactively intervening with those at the far end.
However, the research that backs the use of this measurement isn’t that strong.
In fact, the guidelines describe it only as a “reasonable” test (this is based on the rigour of the research that underpins it). The article published in The Lancet finds no association between the rate at which the walls thicken and subsequent cardiovascular events.
Does this mean we abandon the test? If only life were that simple. Another study, published in 2011, showed that increasing thickness of the carotid artery wall was strongly associated with having a stroke. Clearly the indicative value of the thickness of the walls of the carotid artery is an area of knowledge that’s still in development. Rather unhelpfully – although perhaps realistically – an associated Comment in the journal concludes that “clinicians could continue to use a single measurement…if required” (italics mine). What do you do with a “could”?
These articles suggest that the value of carotid artery thickness as a measurement is yet to be fully established. But cardiovascular disease is a major source of morbidity and mortality worldwide, so anything we can do to help clinicians focus their efforts can only be a good thing. Perhaps, then, there is a case for cautiously using the knowledge, with that caution reducing as the evidence becomes clearer. Certainly the ACC and AHA seem to think so.
Problems arise, however, when seemingly established knowledge is hard-wired into systems that support the delivery of care. As doubts grow as to the value of specific clinical markers, organisations need to be able to pull knowledge out of their systems, or at least moderate how that knowledge is used. In essence, they have to soft-wire their systems.
But how do you soft-wire a system?
There is no easy answer but in my experience there is a general principle: accept that there is no such thing as established knowledge and build systems that are sensitive to change and able to respond.
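One way to picture this principle is to treat a clinical rule as data rather than as logic baked into the system. The sketch below is purely illustrative – the names, grades, and dates are hypothetical, not drawn from any real decision-support product – but it shows the idea: a rule carries its evidence grade and a review date, so it can be moderated or withdrawn without rebuilding the system.

```python
from dataclasses import dataclass
from datetime import date

# A clinical rule stored as data rather than hard-wired logic.
# All names and values here are hypothetical illustrations.
@dataclass
class ClinicalRule:
    name: str
    evidence_grade: str      # e.g. "reasonable" rather than "established"
    active: bool             # can be switched off without a code change
    review_due: date         # forces periodic re-appraisal of the evidence

    def applies(self, today: date) -> bool:
        # A rule is only used while it is active and its evidence is in date.
        return self.active and today <= self.review_due

# The single carotid wall-thickness measurement, soft-wired:
cimt_rule = ClinicalRule(
    name="single carotid intima-media thickness measurement",
    evidence_grade="reasonable",
    active=True,
    review_due=date(2013, 1, 1),
)

print(cimt_rule.applies(date(2012, 7, 9)))   # True: active and in date
print(cimt_rule.applies(date(2013, 6, 1)))   # False: evidence review overdue
```

The point is not the code itself but the governance it forces: someone has to own the review date, and the system degrades gracefully when the evidence lapses instead of silently applying stale knowledge.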
This is not easy, I realise, but all too often I have come across organisations and processes that claim to be aligned with “the evidence” (whatever that means) yet have no clear mechanism for staying abreast of changes. I have also rarely seen the kind of clinical governance that is needed around deciding whether and how changes in knowledge should change processes.
At best, this is irresponsible; at worst it’s dangerous.
Post script, 9th July 2012: Readers interested in getting to grips with local clinical governance issues may find the following article of value: “Governance for clinical decision support: case studies and recommended practices from leading institutions”.
This post was first published on my original blog, Optimising Clinical Knowledge, and co-posted on BMJ Blogs.