Current approaches fail to distinguish patients at low versus high risk for ventricular arrhythmias because of overreliance on left ventricular ejection fraction. In this study, statistical machine learning was used to identify time‐varying risk predictors and key cardiac imaging markers.

In total, 382 cardiomyopathy patients (left ventricular ejection fraction ≤35%) underwent cardiac magnetic resonance imaging before primary prevention implantable cardioverter-defibrillator (ICD) insertion. The primary endpoint was appropriate ICD discharge or sudden death. The time‐varying covariates were interval left ventricular ejection fraction measurements and interval heart failure (HF) hospitalizations. A random forest method for multivariable, longitudinal, and survival outcomes was used. The model incorporated both time‐varying and baseline variables, and was compared with Seattle Heart Failure Model scores and with random survival forest and Cox regression models built from baseline characteristics without imaging variables. The primary endpoint occurred in 75 patients at a median of 3.3 years. Baseline cardiac metrics and HF hospitalization rates significantly improved ventricular arrhythmic risk prediction.
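To illustrate the general modeling approach, the sketch below fits a random survival forest to baseline covariates using the scikit-survival library. The data, column names, and parameter choices are hypothetical, not taken from the study; note also that a standard random survival forest handles only baseline covariates, whereas the study's longitudinal method additionally incorporates time‐varying covariates (e.g., by expanding follow-up into interval-level records).

```python
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

# Hypothetical example data; values are illustrative only.
df = pd.DataFrame({
    "lvef": [25.0, 30.0, 18.0, 34.0, 22.0, 28.0],      # baseline ejection fraction (%)
    "age": [61, 55, 70, 48, 66, 59],
    "hf_hosp": [0, 2, 1, 0, 3, 1],                     # interval HF hospitalizations
    "event": [True, False, True, False, True, False],  # appropriate ICD shock / sudden death
    "years": [1.2, 3.3, 0.8, 3.3, 2.1, 3.0],           # time to event or censoring
})

# Covariate matrix and structured survival outcome (event indicator + time).
X = df[["lvef", "age", "hf_hosp"]]
y = Surv.from_arrays(event=df["event"], time=df["years"])

# Fit a random survival forest on baseline covariates.
rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=2, random_state=0)
rsf.fit(X, y)

# Higher predicted score corresponds to higher estimated arrhythmic risk.
print(rsf.predict(X))
```

In practice, the forest would be compared against a baseline Cox regression on held-out data, for example via the concordance index, mirroring the model comparison described above.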

In conclusion, our proof‐of‐concept study highlights the potential to improve individualized risk assessment for ventricular arrhythmias by incorporating temporally varying risk factors alongside baseline covariates. Our results support the importance of the complex interplay among pathophysiologically driven markers of the underlying myocardial substrate, temporal changes in clinical HF status, and systemic inflammation in identifying patients at increased risk for ventricular arrhythmias.

Ref: https://www.ahajournals.org/doi/10.1161/JAHA.120.017002