Do What You Do Well

It sounds like a good idea at first – measure the things you do well and let that be the guiding light for operational support. However, this “Texas Sharpshooter” approach, although very popular, is biased, corrosive, and ultimately going to bring you face to face with an immovable reality traveling in the opposite direction.

Many quality and performance metrics are nonetheless constructed around this approach, and metrics are often selected for how convenient they are to collect, how well they support the executive’s opinions, or how safely they celebrate areas where there is a low risk of embarrassment. Because it systematically ignores waste, error, and risk, this “Joy vs. Embarrassment” approach to measurement tends to create blind spots, slow organizational learning, and increase the risk of crises and catastrophes.

If you like surprises and excitement, this is the right method for you!

Value-Based Measurement

This approach is far harder, but it is also much more realistic and effective at reducing risk and waste and improving outcomes. It requires a very good idea of what the vision, mission, and objectives are, and the ability to map out a value chain of the processes that are key to realizing the organizational goals. Once the key processes are identified, a measurement plan can be developed to probe points along the value chain and capture the associated process, outcomes, and balancing metrics.
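To make that concrete, here is a minimal sketch in Python of what such a measurement plan might look like as a simple data structure. The value-chain steps and metric names are hypothetical placeholders chosen for illustration, not a recommended set.

# Minimal sketch of a measurement plan keyed to value-chain steps.
# Every step and metric name here is a hypothetical placeholder.
measurement_plan = {
    "referral_and_triage": {
        "process":   ["% referrals triaged within 48 hours"],
        "outcomes":  ["% patients starting treatment within target wait"],
        "balancing": ["staff overtime hours"],
    },
    "treatment": {
        "process":   ["% care plans completed on schedule"],
        "outcomes":  ["% patients reporting meaningful improvement"],
        "balancing": ["30-day readmissions"],
    },
}

for step, metrics in measurement_plan.items():
    # Each probe point along the value chain carries all three metric types.
    print(f"{step}: {metrics['process'] + metrics['outcomes'] + metrics['balancing']}")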

Process metrics are the leading indicators that act as unbiased predictors of whether you will reach the objectives at the current rate of achievement. Process metrics warn you if something is starting to drift off the road, or set your mind at ease that things are going according to plan. A good set of process metrics takes much of the guesswork out of managing a process and reduces risk for all stakeholders. Effective process metrics enable quality improvement (QI) teams and management to detect early signs of variance and error and take targeted corrective action in good time.
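As a rough illustration of how a process metric gives early warning, the sketch below applies simple statistical process control limits to a hypothetical weekly metric; the metric, the numbers, and the baseline period are all invented for the example.

# Minimal sketch: flag early drift in a process metric using 3-sigma control limits.
# The metric (% of referrals triaged within 48 hours) and all values are hypothetical.
from statistics import mean, stdev

baseline = [92, 94, 91, 93, 95, 92, 94, 93]   # weeks used to set expectations
recent   = [93, 91, 90, 88, 86, 84]           # weeks being monitored

centre = mean(baseline)
sigma = stdev(baseline)
lower, upper = centre - 3 * sigma, centre + 3 * sigma

for week, value in enumerate(recent, start=1):
    # A point outside the control limits is an early sign of drift,
    # worth investigating before the outcomes metrics start to suffer.
    if not (lower <= value <= upper):
        print(f"Week {week}: {value} is outside control limits "
              f"({lower:.1f} to {upper:.1f}); investigate now")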

Outcomes metrics are those that announce whether you have achieved the goals or not, and by what margin you did so. Without valid, reliable, and repeatable outcomes metrics, you simply will not know if you succeeded or not. A big gap in many current measurement frameworks is the lack of patient-centered outcomes measures. Very often, what are claimed to be measures of outcomes are actually rather sloppy proxies for outcomes. Executed surgeries, prescribed medications, and completed encounters, for example, are pseudo-outcomes measures that may not be unbiased predictors of whether the patient is actually better or has achieved their goals. It is therefore important to bulk up on metrics that quantify patient outcomes, rather than rely on doubtful proxies.
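The difference between a proxy and a patient-centered outcome is easy to see in miniature. In the hypothetical sketch below, the proxy (surgeries executed) looks like success while the patient-reported scores tell a less comfortable story; the records, score names, and threshold are all invented.

# Minimal sketch contrasting a proxy count with a patient-centered outcome measure.
# The patient records, score fields, and threshold are hypothetical.
patients = [
    {"surgery_done": True, "pre_score": 40, "post_score": 75},
    {"surgery_done": True, "pre_score": 55, "post_score": 58},
    {"surgery_done": True, "pre_score": 60, "post_score": 52},
]

# Proxy "outcome": how many surgeries were executed.
proxy = sum(p["surgery_done"] for p in patients)

# Patient-centered outcome: how many patients improved by a meaningful margin
# on a (hypothetical) patient-reported functional score.
MEANINGFUL_CHANGE = 10
improved = sum(p["post_score"] - p["pre_score"] >= MEANINGFUL_CHANGE for p in patients)

print(f"Surgeries executed: {proxy}")                                # looks like success
print(f"Patients meaningfully better: {improved}/{len(patients)}")   # the real question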

Balancing metrics are safeguards that alert us when the process is causing risk or harm elsewhere. Effective balancing metrics protect against the blind spots of siloed outcomes metrics, and also monitor for side effects of interventions that may be harming the overall mission. In many cases, achievement in one silo can undermine the capability or capacity of others, and success in one process area is often bought at the cost of net failure of the overall mission. Without good balancing metrics, there is a risk of doing more harm than good, and of not noticing the harm until it is too late. For example, an increase in successful dialysis episodes is a good outcome, but not if it is displacing successful transplants. A balancing metric that monitored dialysis episodes against transplant surgeries could detect that the unit was harming long-term patient outcomes by emphasizing one at the expense of the more desirable outcome.
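A balancing metric of this kind can be as simple as watching two numbers together. The sketch below, with invented quarterly figures and an invented threshold, flags exactly the dialysis-versus-transplant pattern described above.

# Minimal sketch of a balancing metric: celebrate dialysis throughput only while
# the transplant rate is not being displaced. All figures and thresholds are hypothetical.
quarters = [
    {"q": "Q1", "dialysis_episodes": 410, "transplants": 18},
    {"q": "Q2", "dialysis_episodes": 455, "transplants": 15},
    {"q": "Q3", "dialysis_episodes": 510, "transplants": 9},
]

TRANSPLANT_FLOOR = 14  # hypothetical minimum acceptable transplants per quarter

for current, previous in zip(quarters[1:], quarters):
    dialysis_up = current["dialysis_episodes"] > previous["dialysis_episodes"]
    transplants_low = current["transplants"] < TRANSPLANT_FLOOR
    if dialysis_up and transplants_low:
        print(f"{current['q']}: dialysis volume is rising while transplants "
              f"({current['transplants']}) fell below {TRANSPLANT_FLOOR}; "
              "check whether one success is displacing the better outcome")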

Stealing Your Heart

Constructing your own measurement framework as described above takes a lot of time, study, and work. Stealing somebody else’s measurement framework and adapting it to your own situation is often cheaper, faster, and almost as good. I therefore happily encourage you to do so. Spend some of that time identifying organizations that are performing really well, have mature measurement processes, and are using them to drive improvement, and copy the best one.

After stealing a measurement framework (which usually just means asking nicely), compare it to your existing policies and processes. If your policies and processes differ from the champions’, ask why, and consider changing yours. Do not stick with a policy or process just because that is how it has always been done. While you do this, you may spot areas that can be customized to suit any special features of your patient population or environment, and you should stay alert for opportunities to improve on what you steal.

Doing so may enable you to leapfrog the champions.

Wrapping it up in a Bow

No matter how good your quality and safety measurement framework is, reality will inevitably drift away from it, so regular monitoring is necessary. Whichever approach you choose to take, the work is not done until you implement a measurement system analysis (MSA) and build in processes to regularly check that your metrics are still valid and reliable. Validity in this sense refers to whether the metrics are quantifying reality rather than just producing empty numbers, while reliability refers to the metric quantifying in a predictable and stable way that does not fluctuate over time or across situations.
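One simple form such a recurring check can take is re-scoring the same sample of cases and measuring agreement. The sketch below does this for a single hypothetical metric; the data and the agreement threshold are invented.

# Minimal sketch of a recurring reliability check for one metric: score the same
# sample of cases twice (e.g. two abstractors, or two pipeline runs) and flag
# poor agreement. Data and threshold are hypothetical.
first_pass  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # metric scored per case, run 1
second_pass = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1]   # same cases scored again, run 2

agreement = sum(a == b for a, b in zip(first_pass, second_pass)) / len(first_pass)

MIN_AGREEMENT = 0.90  # hypothetical reliability threshold for this metric
if agreement < MIN_AGREEMENT:
    print(f"Reliability check failed: agreement {agreement:.0%} < {MIN_AGREEMENT:.0%}; "
          "review the metric definition and data collection before trusting it")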

Finally, in addition to routine monitoring and evaluation (M&E) of the measurement framework, you need to set triggers that will alert you if things have drifted between routine MSA observations. This embraces the single- vs double-loop learning described by Chris Argyris, and involves setting triggers that will alert you if any situation has occurred that should not have. “Never Events,” for example, should trigger a review of policies, processes, and the measurement framework to see why no process, outcomes, or balancing metric alerted you to an out-of-limits risk.
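A trigger of this kind does not need to be elaborate. The sketch below scans a hypothetical incident feed for never events and demands a double-loop review whenever one appears; the event names and records are invented.

# Minimal sketch of an out-of-band trigger: any "never event" in the incident feed
# forces a review of policies, processes, and the measurement framework, regardless
# of when the next routine MSA is scheduled. Event names and records are hypothetical.
NEVER_EVENTS = {"wrong_site_surgery", "retained_instrument", "fatal_medication_error"}

incidents = [
    {"id": 101, "type": "near_miss_fall"},
    {"id": 102, "type": "retained_instrument"},
]

for incident in incidents:
    if incident["type"] in NEVER_EVENTS:
        print(f"Incident {incident['id']} is a never event: open a double-loop review. "
              "Why did no process, outcomes, or balancing metric warn us first?")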
