System from Stanford helps extract and organize scanned referral records, saving time for clinicians

The electronic health record (EHR): Love it or hate it, it’s here to stay, but it’s a time suck. A clinician can easily devote 62% of a patient’s visit time to the EHR.

Among the biggest challenges physicians face are referral records scanned into the EHR, patient histories that can run to hundreds of pages. Researchers at Stanford University have developed and tested an artificial intelligence (AI) system that may help.

This system “extracts and organizes relevant patient information and presents it to physicians alongside the entire scanned medical record in a web-based user interface,” Sidhartha R. Sinha, MD, from the Division of Gastroenterology and Hepatology, Department of Medicine at Stanford University, California, and colleagues wrote in JAMA Network Open.

In a prognostic study, 12 gastroenterologists each reviewed two referral records, one AI-optimized and one standard non-optimized, and answered 22 questions that required them to search each record for clinically relevant information. Sinha and colleagues then compared the time it took to complete each review and the accuracy of the information the physicians found. Each physician also completed a survey about the experience, including whether they would recommend the system and how it could be improved.

Not surprisingly, the AI-optimized records saved the physicians “18% of the time used to answer the clinical questions (10.5 [95%CI, 8.5-12.6] versus 12.8 [95%CI, 9.4-16.2] minutes; P=0.02),” Sinha and colleagues wrote. “There was no significant decrease in accuracy when physicians retrieved important patient information (83.7% [95%CI, 79.3%-88.2%] with the AI-optimized versus 86.0% [95% CI, 81.8%-90.2%] without the AI-optimized record; P=0.81).”

The physicians in the study had a generally positive view of the process, and most (11 of 12) preferred the AI-optimized review to the standard review. While there is a learning curve to its use, “11 of 12 physicians believed that the technology would save them time to assess new patient records and were interested in using this technology in the clinic.”

The clinicians estimated that the AI system could save 5 to 30 minutes in reviewing records. In the study, the mean time saved in reviewing new patient records was 14.5 minutes.

In an evaluation of the AI system on four records containing 136 pages, the system classified dates on 119 pages with 87.5% accuracy (95% CI, 80.9%-92.0%) and classified the information contained on 109 pages into the correct content category with 74.3% accuracy (95% CI, 66.9%-81.7%).

“By contrast, a majority-class baseline, where the most common class in the data set is always predicted (‘note’ in this case), achieved an accuracy of 50.0% (95%CI, 41.1%-58.8%). When evaluated on laboratory name extraction only, the laboratory extraction system achieved an F1 of 88.0% (95%CI, 82.35%-93.13%); when evaluated on both name and value extraction, the system achieved an F1 of 77.2% (95%CI, 67.9%-85.3%),” the study authors noted, adding that F1 is “the harmonic mean of positive predictive value and sensitivity.”
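For readers unfamiliar with the metric, the calculation itself is simple. The short Python sketch below shows it; the inputs are illustrative values, not the study’s actual positive predictive value and sensitivity.

```python
def f1_score(ppv: float, sensitivity: float) -> float:
    """F1 is the harmonic mean of positive predictive value
    (precision) and sensitivity (recall)."""
    if ppv + sensitivity == 0:
        return 0.0
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Illustrative values only, not figures from the study:
print(round(f1_score(0.90, 0.86), 3))  # 0.88
```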

Richard J. Baron, MD, from the American Board of Internal Medicine, Philadelphia, wrote in an invited commentary that the study “offers a ray of hope that a promising solution could be on the horizon” for deciphering referral records. As he noted in the commentary: “Sometimes it takes a computer to solve problems created by a computer.”

The AI system used a pipeline that “consisted of algorithms to (1) read text in PDF to extract dates, laboratory findings, and social history and (2) organize the record’s pages by content category (referral, fax, insurance, progress note, procedure note, radiology report, laboratory values, operative report, or pathology report),” the study authors noted.

The system first extracted dates from the pages scanned in the record. “Subsequently, we created an algorithm to identify laboratory values in the record and organized the results in a distinct table. A content categorization model was developed to organize the record by the following categories: referral, note, laboratory, radiology, procedure, operative report, pathology, fax cover sheet, or insurance,” Sinha and colleagues explained. “Finally, a page-grouping algorithm, using a convolutional neural network and textual heuristics, was developed to partition the record into its constituent documents.”
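The published article describes these components at this level of detail but does not include code. As a rough, hypothetical illustration of how such a pipeline might be wired together, the Python sketch below chains simplified stand-ins for each stage: regular expressions in place of the study’s date and laboratory extractors, a keyword lookup in place of its trained categorization model, and no page-grouping network at all. Every name in it is an assumption, not the authors’ implementation.

```python
import re
from dataclasses import dataclass, field
from typing import List, Tuple

# Content categories reported in the study.
CATEGORIES = ["referral", "note", "laboratory", "radiology", "procedure",
              "operative report", "pathology", "fax cover sheet", "insurance"]

@dataclass
class Page:
    number: int
    text: str                        # text extracted from one scanned page
    dates: List[str] = field(default_factory=list)
    labs: List[Tuple[str, float]] = field(default_factory=list)
    category: str = "note"           # 'note' is the data set's majority class

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")
# Toy lab pattern: a lab name followed by a numeric value, e.g. "Hgb 9.2".
LAB_RE = re.compile(r"\b(Hgb|WBC|Plt|CRP|ALT|AST)\s*[:=]?\s*(\d+(?:\.\d+)?)", re.I)

def extract_dates(text: str) -> List[str]:
    """Stand-in for the study's date extraction step."""
    return DATE_RE.findall(text)

def extract_labs(text: str) -> List[Tuple[str, float]]:
    """Stand-in for the laboratory name/value extractor."""
    return [(m.group(1), float(m.group(2))) for m in LAB_RE.finditer(text)]

def classify_category(text: str) -> str:
    """Stand-in for the content categorization model: keyword lookup
    with the majority class 'note' as the fallback."""
    lowered = text.lower()
    for category in CATEGORIES:
        if category in lowered:
            return category
    return "note"

def process_record(pages: List[Page]) -> List[Page]:
    """Run every page through the extraction and categorization stages."""
    for page in pages:
        page.dates = extract_dates(page.text)
        page.labs = extract_labs(page.text)
        page.category = classify_category(page.text)
    return pages
```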

The optimized information was presented to the clinician via a web interface. “Displayed on the left side of the interface was a summary containing a list of document categories found in the record, along with hyperlinks to the original full PDF record, which was shown on the right side of the interface in its entirety. All the information in the original referral was put through these algorithms and categorized by the AI system,” the study authors wrote.
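As another hypothetical sketch, reusing the Page class from the example above: a summary pane like the one described could be produced by grouping pages by predicted category and linking each entry into the full PDF with the standard '#page=N' open parameter, which browser PDF viewers generally honor. None of this is the study’s actual interface code.

```python
from collections import defaultdict
from typing import List

def build_summary_html(pages: List[Page], pdf_url: str = "record.pdf") -> str:
    """Group pages by predicted category and hyperlink each page into
    the full PDF via the '#page=N' PDF open parameter."""
    by_category = defaultdict(list)
    for page in pages:
        by_category[page.category].append(page.number)
    items = []
    for category, numbers in sorted(by_category.items()):
        links = ", ".join(
            f'<a href="{pdf_url}#page={n}">p. {n}</a>' for n in numbers
        )
        items.append(f"  <li>{category}: {links}</li>")
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```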

“Of course, there are many limitations to this study,” Baron wrote. “It included 12 willing physicians at 1 institution; it was performed in the specialty of gastroenterology; and there was some rate of miscategorization of uncertain clinical or patient safety consequence. However, it can and should be understood as a proof of concept: AI can be used as a first-pass technology to reduce the workload of a clinician who must wade through voluminous old records. The users had a variety of suggestions for improving the process; however, the fact that, after only a short online training, the physicians saved such a meaningful amount of time and were positively disposed to having the tool available in real life is truly impressive.”

  1. An AI system developed at Stanford, designed to extract and organize relevant patient information and present it to physicians alongside the entire scanned medical record in a web-based user interface, saved time for clinicians compared with non-optimized scanned records.
  2. Be aware this is a small prognostic study conducted at a single institution, which may limit generalizability.

Candace Hoffmann, Managing Editor, BreakingMED™

Sinha and colleagues had no disclosures.

Baron had no disclosures.
