Nationally recognized and highly regarded health systems have a variety of ways to expand the reach of their “secret sauce.” For some, it’s launching outpatient centers in the suburbs. For others, it may be developing clinical decision support tools based on their (laudable) best practices. While exporting best practices has a definite appeal, decision support tools linked to the expert opinion of one group of clinicians have several limitations. Ideally, decision support tools should rely first on evidence-based content wherever rigorous, double-blind studies have confirmed a particular care path. Expert opinion should be used secondarily, to bolster that evidence in scenarios where the research is conflicting or inconclusive. Practice guidance built on solid, gold-standard evidence requires constant diligence, rigorous exclusion of bias, and a preference for scientific knowledge over opinion; adherence to these principles is demonstrably variable in recommendations from health systems and from medical specialty societies.
Erasing geographic inequalities and extending the best care to all patients is an admirable goal. But an approach that privileges the advice of a select group of healthcare providers carries risks. These solutions may “bake in” institutional biases. They may also create the assumption that what works at one health system will work equally well at another, despite differences in governance structures, patient populations, or reimbursement profiles. In addition, clinicians are rarely quick to alter entrenched practices: the Institute of Medicine estimates that it takes an average of 17 years for a new medical advance to become common practice. Meanwhile, the evidence itself shifts constantly, with new findings and reversals of standards that must be tracked with a currency and flexibility often difficult to achieve in care system processes.
Explicit Versus Implicit Knowledge
In medicine, we can either trust a small group of exemplary clinicians or the knowledge gleaned from comprehensive analysis and curation of the current peer-reviewed literature. In other words, we can rely on “implicit” or tacit knowledge versus “explicit” knowledge.
Explicit knowledge is usually stored in documents and databases and can be readily articulated, codified, accessed, and verbalized. In medicine, that includes peer-reviewed studies in medical journals that draw conclusions from data curated across a wide swath of healthcare facilities and that report reproducible results.
For example, smartphone GPS applications calculate the recommended route using explicit knowledge: current traffic, mileage comparisons, and travel times by highway versus local roads. Unlike asking a few locals for the best route during rush hour, GPS calculations are “intuition-free,” delivering decision support based on predefined rules and logic rather than personal preference.
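To make the contrast concrete, here is a minimal sketch of that kind of intuition-free route selection, using Dijkstra’s shortest-path algorithm over a small, hypothetical road network (the road names and travel times are invented for illustration; real navigation apps use far richer data and proprietary logic):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: choose the route with the lowest total travel
    time using only the explicit edge weights -- no intuition involved.
    graph maps each node to a list of (neighbor, minutes) pairs."""
    queue = [(0, start, [start])]  # (elapsed minutes, current node, path so far)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + weight, neighbor, path + [neighbor]))
    return None

# Hypothetical network: travel times (minutes) reflect current traffic.
roads = {
    "home":     [("highway", 10), ("local_rd", 4)],
    "highway":  [("office", 8)],
    "local_rd": [("office", 20)],
}

print(shortest_route(roads, "home", "office"))
```

Even though the local road leaves home sooner, the rules pick the highway because its total travel time is lower — the same predefined-logic behavior the GPS analogy describes.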
Meanwhile, implicit or tacit knowledge is rooted in context, experience, and values. It can be hard to communicate, as it often resides in the mind of an individual. In healthcare, implicit knowledge can be a great source of competitive advantage, as any healthcare system is only as good as its clinicians. But it also privileges the preferences of a small group, based on anecdotes that may not reflect the findings of medical research at large. If clinicians can link to clinical decision support tools that are based on a wide base of evidence, they can continually compare their own preferences and habits against the wisdom found in peer-reviewed journals.
Clinicians must sometimes make decisions against a backdrop of inconclusive or contradictory evidence. A decision support tool built on the recommendations of a small group may appear to erase that ambiguity, offering definitive-sounding advice without revealing the differing confidence levels behind those judgments. By contrast, using evidence-based content to guide decisions can highlight any conflicts within the literature, giving the frontline healthcare provider the nuanced information they need to make their own best judgment.
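The difference can be illustrated with a toy sketch of evidence-graded guidance. The therapy names, grades, and notes below are entirely hypothetical; the point is only that each recommendation carries its confidence label rather than presenting a single unqualified answer:

```python
# Hypothetical guidance entries: each recommendation keeps the evidence
# grade behind it visible instead of hiding the ambiguity.
recommendations = [
    {"therapy": "Therapy B", "grade": "moderate", "note": "one RCT, conflicting cohort data"},
    {"therapy": "Therapy C", "grade": "expert",   "note": "specialty-society consensus only"},
    {"therapy": "Therapy A", "grade": "high",     "note": "two double-blind RCTs"},
]

GRADE_ORDER = {"high": 0, "moderate": 1, "expert": 2}

def ranked_with_confidence(recs):
    """Order recommendations by evidence strength, keeping the confidence
    label attached so the clinician sees where the literature conflicts."""
    return sorted(recs, key=lambda r: GRADE_ORDER[r["grade"]])

for rec in ranked_with_confidence(recommendations):
    print(f'{rec["therapy"]}: {rec["grade"]} evidence ({rec["note"]})')
```

A tool that surfaced only the top entry would look decisive; surfacing the grades shows the clinician exactly how much of the ranking rests on expert opinion rather than trials.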
The Limits of the “Secret Sauce”
It’s understandable and laudable that these recognized health systems would try to leverage their reputation and export best practices to a larger market. But a clinical decision support tool that relies on a relatively small, homogenous pool of experts in a unique environment may introduce risk in the form of institution-specific biases and assumptions. Such solutions may also fail to quickly address advances in the medical evidence.
In the same way a mutual fund mitigates the risk of investing in just a few stocks, a clinical decision support tool based on a comprehensive review of the medical literature can help mitigate risk and leverage the wisdom of the crowd. While a single health system, specialty medical society, or organization may have clear recommendations on their best practices for serving their patients, these protocols may not universally translate into improved care across specialties or geographies.
Patient populations, infrastructure, and other factors differ between institutions, but clinical decision support tools should be applicable to any institution and grounded, to the extent possible, in adjudicated scientific clinical studies. The best clinical decision support tools should also contribute to the goal of becoming a “learning health system” that embraces new advances in far less time than 17 years. The insights of our best clinicians should not be discounted. However, clinical decisions should always weigh a combination of human judgment and intuition-free data based on the best clinical evidence.