Data suggest that documentation is among the most time-consuming and costly aspects of using an electronic health record (EHR) system. Speech recognition (SR) technology—the automatic translation of voice to text—has been increasingly adopted to help clinicians complete their documentation in a more time- and cost-effective manner. However, little is known regarding how SR can be used safely and efficiently in healthcare settings.

For a study published in JAMA Network Open, my colleagues and I set out to identify and analyze errors in clinical documents created by dictation with SR software and edited by professional transcriptionists. We developed a comprehensive schema for identifying and classifying errors. We then retrieved 217 dictated clinical documents from two integrated healthcare systems at three stages (the original SR transcription, the transcriptionist-edited version, and the signed note in the EHR) and annotated each version for errors.

We found that 7.4% of words in unedited, SR-generated documents involved errors, and one in 250 words carried a clinically significant error. At least one error appeared in 96.3% of raw SR transcriptions, and 63.6% contained at least one clinically significant error. However, error rates fell significantly after review by a medical transcriptionist (to 0.4% of words) and further still after the clinician reviewed the edited transcript (0.3%), highlighting the crucial role of manual review in the SR-assisted documentation process.
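For readers who want to reproduce this kind of measurement, a word-level error rate can be approximated by aligning a transcript against a reference text and counting the words touched by an edit. The sketch below is a hypothetical illustration, not the study's annotation method (which relied on trained annotators applying a formal schema); the `word_error_rate` function and the example sentences are assumptions for demonstration only.

```python
import difflib

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Estimate the fraction of words involved in an error (substitution,
    insertion, or deletion), using a simple word-level alignment."""
    ref_words = reference.split()
    hyp_words = hypothesis.split()
    matcher = difflib.SequenceMatcher(a=ref_words, b=hyp_words)
    errors = 0
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            # Count the larger side of the edit so that insertions into
            # the hypothesis are also reflected in the rate.
            errors += max(i2 - i1, j2 - j1)
    return errors / max(len(ref_words), 1)

# Hypothetical example: treat the signed note as the reference text.
signed = "patient denies chest pain and shortness of breath"
raw_sr = "patient denies chest pain in shortness of breath"
print(f"raw SR error rate: {word_error_rate(signed, raw_sr):.1%}")  # 12.5%
```

In practice, the signed EHR note could serve as the reference for the two earlier stages, mirroring the three-stage comparison described above.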

Our findings demonstrate the importance of manual review, user training, quality assurance, and auditing for ensuring the accuracy of SR-assisted documentation. Automated error detection and correction methods that employ natural language processing (NLP) may help reduce errors further.
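To make that closing suggestion concrete, here is a minimal sketch of one rule-based approach an automated checker might take: flagging terms from a list of acoustically confusable clinical word pairs for human confirmation. The `CONFUSABLE_PAIRS` list and `flag_candidate_errors` function are hypothetical; a production NLP system would use statistical language models and surrounding context rather than a hard-coded lexicon.

```python
import re

# Hypothetical pairs of acoustically confusable clinical terms; a real
# system would learn such pairs from data rather than hard-code them.
CONFUSABLE_PAIRS = [
    ("hypotension", "hypertension"),
    ("fifteen", "fifty"),
    ("abduction", "adduction"),
]

def flag_candidate_errors(transcript: str) -> list[str]:
    """Flag words belonging to a known confusable pair so a human
    reviewer can confirm which member of the pair was dictated."""
    flags = []
    words = re.findall(r"[a-z]+", transcript.lower())
    for word in words:
        for a, b in CONFUSABLE_PAIRS:
            if word in (a, b):
                flags.append(f"'{word}' (confusable with '{b if word == a else a}')")
    return flags

note = "Patient was given fifty milligrams for hypotension."
for flag in flag_candidate_errors(note):
    print(flag)
# 'fifty' (confusable with 'fifteen')
# 'hypotension' (confusable with 'hypertension')
```

Even a simple screen like this could route high-risk phrases (doses, laterality, negations) to the manual review step that our results show is essential.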
