Many patient and staff surveys are so deeply flawed that they provide only the impression of usefulness, while actually misleading readers and hiding the value of what respondents have shared. There are three fundamental flaws in these surveys, and addressing them can turn bad survey instruments into genuine sources of value.

Alright, I know there is a pandemic going on, people are screaming abuse at clinicians about having to wear little pieces of cloth over their faces, and you are exhausted: physically, mentally, and spiritually.

But hear me out here.

This is exactly the time to spend a few minutes nostalgically reading something arcane about surveys. Surveys are the kind of humdrum thing that still carries on no matter what goes on outside—like getting a parking ticket during an air raid. Even if aliens started bombarding the planet with old shoes and used diapers, we would still be surveying staff and patients.

This blog, though I say it myself, is a delightfully arcane exploration of a hidden world within surveys. It describes three nested reasons why your facility’s current surveys are probably not adding much value, and how you could change that. Alternatively, you will have three great ways to torpedo HR’s next pesky survey initiative.

There are three reasons most hospital surveys of staff or patients may be a steaming pile, and these particular elements of woe are rarely described in the literature. First, survey items tend to focus far too much on easily quantifiable things, even when those things are foamy meaninglessness. Second, the scales are probably so low in construct validity that you might as well scribble in random numbers instead. Third, they ask the wrong kind of question entirely: questions that encourage pure speculation and confabulation instead of drawing on the wealth of experience that staff or patients can provide.

1. Tyranny of the Quantifiable

Most of the attention and time goes to the survey questions that are answered as a number or a point on a scale, but in most cases the real value is in the open-ended responses. In the open-ended text, patients and staff pour their little hearts out in the (mostly vain) hope that someone will listen and use what they say to improve things. At best, most analysis cherry-picks a few quotes to illustrate a graph or pie chart; otherwise, the responses are discarded, and with them most of the value in the survey. Let me repeat: the quantifiable stuff is the least valuable part of most surveys, yet it gets 99% of the attention.

This is a crying shame, because the open text usually has specific, real-world examples of value, risk, and opportunity. It’s in these open text areas that staff or patients will describe a potentially serious safety issue, detail a specific problem, or suggest a really good opportunity to improve quality, safety, or efficiency. Quite often, they nominate things that could go straight into a PDSA improvement target, a Kaizen event, or an improvement notice.
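To make that concrete, here is a minimal sketch in Python of a first-pass triage that keeps the open-text gold from being discarded with the rest. The file name, column name, and keyword buckets are hypothetical placeholders, not a real pipeline; you would tune them against your own responses.

```python
import csv

# Hypothetical keyword buckets for a crude first pass; refine these
# with terms pulled from your own open-text responses.
BUCKETS = {
    "safety": ["slip", "fell", "fall", "injur", "unsafe", "hazard", "broken"],
    "improvement": ["suggest", "could", "should", "wish", "would help"],
}

def triage(text):
    """Return the buckets whose keywords appear in a response."""
    lowered = text.lower()
    return [name for name, words in BUCKETS.items()
            if any(word in lowered for word in words)]

# Hypothetical export with one open-text "comment" column per row.
with open("survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        tags = triage(row["comment"])
        if tags:
            print(tags, "->", row["comment"][:80])
```

Even something this crude surfaces the comments worth a human read, and each flagged comment is a candidate for that PDSA target or Kaizen event.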

2. Scale Divergence

Many of the scales used, especially the very popular Likert-style scales (satisfied vs dissatisfied, good vs bad, agree vs disagree), have a severe issue: they are actually two scales stuck together. You can see this if you snap the scale in half and pose both questions, each as a single scale. It is really important to know when people are both very satisfied AND very dissatisfied, rather than seeing a slightly skewed neutral number that is the sum of the two. Seeing the halves gives you far more to work on and far better chances of protecting something that is valued and fixing or removing something that people hate.

You may also notice, if you compare the two half-question items, that they don't add up to what respondents gave when you asked it as a single item. If you ask "On a scale of 1-5, how satisfied are you with X?" and also ask "How dissatisfied are you with X?", you often get high scores on BOTH or low scores on BOTH, and the pair doesn't jibe with the combined score you got before you snapped the scale in half. This is important to know, and seeing only the sum hides what is actually going on.
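To see the divergence in miniature, here is a tiny worked example in Python with hypothetical respondents. The collapse function is just one plausible way a single bipolar item flattens the pair; it is not modeled on any particular instrument.

```python
# Hypothetical answers to the two half-scale items, each scored 1-5.
respondents = [
    ("loves-and-hates-it", 5, 5),  # strongly satisfied AND strongly dissatisfied
    ("barely-noticed-it",  1, 1),  # neither satisfied nor dissatisfied
    ("mostly-happy",       4, 2),
]

def collapse(satisfied, dissatisfied):
    """One plausible mapping of the pair onto a single bipolar 1-5 item:
    start at neutral (3), shift by the net sentiment, clamp to 1-5."""
    return max(1, min(5, 3 + (satisfied - dissatisfied) / 2))

for label, sat, dis in respondents:
    print(f"{label}: satisfied={sat}, dissatisfied={dis}, "
          f"single-scale={collapse(sat, dis):.1f}")
```

The first two respondents both land on a "neutral" 3.0, even though one feels strongly both ways and the other barely cares at all. That distinction is exactly what the welded-together scale throws away.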

3. Wrong Question Modality

There’s a monster of a bug built into how we usually think about what people are capable of telling us. Surveys often (almost always) ask for opinions or judgements when they really should be asking about experiences. Asking someone to rate the usefulness of the patient portal, the helpfulness of the admissions staff, the effectiveness of bed management software, or how good the rostering is, is actually a really bad way of doing the research. It breeds a cockeyed notion that people’s opinions are what matter in making decisions about policy, technology, or workflow. Actually, opinions are nearly useless. Being told that the bed management software has a low satisfaction rating tells you nothing about what to do.

Experiences, on the other hand, are gold. If people tell you what they have experienced (slow startup, missing bed counts, boarding in the ER because no med-surg beds are open, lost data, confusing instructions, slippery floors, broken instruments, wrong instruments), now that’s useful, actionable information. You can do something specific about an experience, and an experience is usually described with higher fidelity, validity, and reliability. The experience that “I slipped and nearly fell at the elevators on 5W” is a ton more useful than one person rating “Office Safety” as 3 out of 5, or answering “Mostly disagree” to “floors are well maintained.” A thousand low ratings, even of a specific construct like “floor conditions in your work area,” are still nowhere near as actionable as the specific experience.

So, What to Do?

If the aim is simply to stop yet another feeble-minded HR initiative, then use these three points to torpedo it below the waterline, and hopefully they will lose interest and go bug a different department.

If you want to start getting actual value out of your staff and patient surveys, do this in sequence:

1. Revamp existing surveys to collect better open-ended responses that focus on experiences.

2. Re-engineer your next survey design by snapping the welded scales in half to recover construct validity.

3. Do the heavy lifting: use what you discovered in the open-ended, experience-based responses to develop single-scale questions that are experience-focused rather than opinion-focused.

By approaching surveys with a view to reducing the tyranny of the quantifiable, freeing scales that have been artificially welded together, and focusing on experiences instead of opinions, you can gain far more value from surveys and develop actionable insights that enable rapid improvement cycles that make an observable difference.
