Context: Palliative care programs are typically evaluated using observational data, raising concerns about selection bias.
Objectives: To quantify selection bias due to observed and unobserved characteristics in a palliative care demonstration program.
Methods: We used program administrative data and 100% Medicare claims data from two states and a 20% claims sample from eight states (2013-2017). The sample included 2,983 Medicare fee-for-service beneficiaries aged 65 and older who participated in the palliative care program, along with three matched comparison cohorts: 1) regional, 2) two-state, and 3) eight-state. Confounding due to observed factors was measured by comparing patients' baseline characteristics. Confounding due to unobserved factors was measured by comparing days of follow-up and six-month and one-year mortality rates.
Results: After matching, evidence of confounding due to observed factors included differences in baseline characteristics such as race, morbidity, and utilization. Evidence of confounding due to unobserved factors included significantly longer mean follow-up in the regional, two-state, and eight-state comparison cohorts (207, 192, and 187 days, respectively; all p<.001) than in the palliative care cohort (162 days). Six-month and one-year mortality rates were 53.5% and 64.5% in the palliative care cohort, compared with 43.5% and 48.0% in the regional comparison, 53.4% and 57.4% in the two-state comparison, and 55.0% and 59.0% in the eight-state comparison.
Conclusion: This case study demonstrates that the choice of comparison group affects the magnitude of measured and unmeasured confounding, which in turn may change effect estimates. The substantial impact of confounding on effect estimates in this study raises concerns about evaluating novel serious illness care models in the absence of randomization. We present key lessons learned to improve future observational evaluations of palliative care.

Published by Elsevier Inc.
