A New Set of Eyes: Researchers Develop First AI-Enabled Wearable Camera To Detect Drug Errors

Marylee Williams
Tuesday, October 22, 2024

A team including a CMU researcher has developed the first wearable camera that uses artificial intelligence to help prevent drug administration errors in medical settings.

Every year, roughly 1.2 million patients experience adverse outcomes associated with injectable medications, and these errors are estimated to cost about $5 billion. A team including a researcher from Carnegie Mellon University has developed the first wearable camera that uses artificial intelligence to help prevent such errors. 

"The goal of our system is to catch these drug administration errors in real-time, before the injection, and provide an alert so the clinician has a chance to intervene before any patient harm," said Justin Chan, an assistant professor in the School of Computer Science's Software and Societal Systems Department and the College of Engineering's Electrical and Computer Engineering Department.

In addition to Chan, the team included researchers from the University of Washington, Makerere University and the Toyota Research Institute.

The error rate across all drugs given in hospitals is about 5 to 10 percent, and errors can happen at all levels of care. To design the wearable camera system, the researchers focused on training deep learning algorithms to detect errors when a clinician transfers a drug from a vial into a syringe. These include vial swap errors, which occur when the wrong vial is used or the drug label on the syringe is incorrect, and syringe swaps, in which the label is correct but the clinician administers the wrong drug. To prevent such errors, hospitals use safeguards like requiring barcode scanning for syringes, but in high-pressure situations clinicians can forget to scan a drug's barcode or to manually record its contents.
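For illustration only, the basic logic of flagging a mismatch between the drug drawn from a vial and the label on the syringe might resemble the Python sketch below. The function, threshold and drug names are hypothetical placeholders, not details from the study.

```python
# Minimal sketch (not the published system): raise an alert when the drug
# the camera believes was drawn from the vial does not match the syringe
# label. All names and thresholds here are hypothetical.

def check_for_vial_swap(predicted_vial_drug: str,
                        predicted_syringe_label: str,
                        confidence: float,
                        threshold: float = 0.9) -> bool:
    """Return True if an alert should be raised before injection."""
    if confidence < threshold:
        # Prediction too uncertain to act on; defer rather than alert.
        return False
    return predicted_vial_drug != predicted_syringe_label

# Hypothetical example: a vial of ondansetron drawn into a syringe labeled
# "propofol" triggers a pre-injection warning.
if check_for_vial_swap("ondansetron", "propofol", confidence=0.97):
    print("ALERT: drug in vial does not match syringe label")
```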

In their study, published today in npj Digital Medicine, researchers demonstrated how the AI-enabled wearable camera system could detect vial swap errors with a sensitivity of 99.6 percent and a specificity of 98.8 percent.
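Sensitivity and specificity carry their standard meanings here: the fraction of true swap errors the system flags, and the fraction of correct preparations it leaves alone. The short Python illustration below uses made-up confusion-matrix counts, not the study's data, purely to show how such figures are computed.

```python
# Standard definitions of sensitivity and specificity, demonstrated with
# made-up counts (not the study's data).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of actual errors the system catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of correct preparations the system does not flag."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: 249 of 250 true swaps flagged, and 988 of 1,000
# normal preparations correctly left alone.
print(f"sensitivity = {sensitivity(249, 1):.3f}")   # 0.996
print(f"specificity = {specificity(988, 12):.3f}")  # 0.988
```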

Chan said creating a deep learning system that detects errors as they happen was challenging because syringes, vials and drug labels are small, and clinicians' hands can inadvertently obscure them. The researchers collected a large training dataset from operating room environments with varied backgrounds and lighting conditions, using a small head-mounted camera strapped to physicians' foreheads and tilted down to capture the providers' hands. Over 55 days, they collected 4K video footage of drug preparation events from 13 anesthesiology providers in 17 operating rooms at two hospitals.

"We designed the algorithm so that instead of reading the label text, which can be obscured, it only needs to catch a glimpse of visual cues like vial and syringe shape, label color, and font size for a short period of time to classify what the drug is," Chan said.

Now that the researchers have demonstrated the system's accuracy, their next step is to incorporate it into smart eyewear that can provide visual or auditory warnings to clinicians before a drug is delivered to a patient.

"This work demonstrates how AI-enabled systems can serve as a second set of 'eyes' to improve healthcare practices and patient safety," Chan said. "When integrated into an electronic medical system, our system also opens up opportunities for automatic documentation of drug information and can reduce the overhead of manual record-keeping."

For More Information

Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu