
Improved Cough-Detection Tech Can Help With Health Monitoring

The advance makes it easier to monitor chronic health conditions and predict health risks such as asthma attacks.


For Immediate Release

Researchers have improved the ability of wearable health devices to accurately detect when a patient is coughing, making it easier to monitor chronic health conditions and predict health risks such as asthma attacks. The advance is significant because cough-detection technologies have historically struggled to distinguish the sound of coughing from the sound of speech and nonverbal human noises.

“Coughing serves as an important biomarker for tracking a variety of conditions,” says Edgar Lobaton, corresponding author of a paper on the work and a professor of electrical and computer engineering at North Carolina State University. “For example, cough frequency can help us monitor the progress of respiratory diseases or predict when someone’s asthma condition is being exacerbated, and they may want to use their inhaler. That’s why there is interest in developing technologies that can detect and track cough frequency.”

Wearable health technologies offer a practical way to detect sounds. In theory, models with embedded machine learning can be trained to recognize coughs and distinguish them from other types of sounds. However, in real-world use, this task has turned out to be more challenging than expected.

“While models have gotten very good at distinguishing coughs from background noises, these models often struggle to distinguish coughs from speech and similar sounds such as sneezes, throat-clearing, or groans,” Lobaton says. “This is largely because, in the real world, these models run across sounds they have never heard before.

“Cough-detection models are ‘trained’ on a library of sounds, and told which sounds are a cough and which sounds are not a cough,” Lobaton says. “But when the model runs across a new sound, its ability to distinguish cough from not-cough suffers.”

To address this challenge, the researchers turned to a new source of training data for the cough-detection model: wearable health monitors themselves. Specifically, the researchers collected two types of data from health monitors designed to be worn on the chest. First, they collected audio data picked up by the monitors. Second, they collected data from an accelerometer in the monitors, which detects and measures movement.

“In addition to capturing real-world sounds, such as coughing and groaning, the health monitors capture the sudden movements associated with coughing,” Lobaton says.

“Movement alone cannot be used to detect coughing, because movement provides limited information about what is generating the sound,” says Yuhan Chen, first author of the paper and a recent Ph.D. graduate from NC State. “Different actions – like laughing and coughing – can produce similar movement patterns. But the combination of sound and movement can improve the accuracy of a cough-detection model, because movement provides complementary information that supports sound-based detection.”
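To illustrate the general idea Chen describes, the sketch below shows one common way to combine two sensing modalities: extract simple summary features from an audio window and an accelerometer (IMU) window, then concatenate them into a single feature vector for a cough/not-cough classifier. This is an illustrative example only, not the authors' model; the window lengths, sampling rates, features, and fusion strategy here are all hypothetical.

```python
# Illustrative sketch of feature-level audio + IMU fusion.
# NOT the published model: features, window sizes and rates are hypothetical.
import numpy as np

def extract_features(audio_window, imu_window):
    """Fuse simple summary statistics from both modalities into one vector."""
    audio_feats = np.array([
        audio_window.mean(),                          # average amplitude
        audio_window.std(),                           # loudness variability
        np.abs(np.fft.rfft(audio_window)).argmax(),   # dominant frequency bin
    ])
    imu_feats = np.array([
        imu_window.std(axis=0).mean(),   # overall movement variability
        np.abs(imu_window).max(),        # peak acceleration (a sudden jolt)
    ])
    # Feature-level fusion: concatenate the two modalities
    return np.concatenate([audio_feats, imu_feats])

# Example: one 1-second window of 750 Hz audio and 100 Hz, 3-axis IMU data
rng = np.random.default_rng(0)
audio = rng.normal(size=750)
imu = rng.normal(size=(100, 3))
features = extract_features(audio, imu)
print(features.shape)  # prints (5,)
```

A classifier trained on such fused vectors can use the movement features to corroborate (or cast doubt on) what the audio features suggest, which is the complementary-information idea described above.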

In addition to drawing on multiple sources of data collected from real-world sources, the researchers also built on previous work to refine the algorithms being used by the cough-detection model.

When the researchers tested the model in a laboratory setting, they found it was more accurate than previous cough-detection technologies. Specifically, the new model produced fewer “false positives,” meaning that sounds it identified as coughs were more likely to actually be coughs.

“This is a meaningful step forward,” Lobaton says. “We’ve gotten very good at distinguishing coughs from human speech, and the new model is substantially better at distinguishing coughs from nonverbal sounds. There is still room for improvement, but we have a good idea of how to address that and are now working on this challenge.”

The paper, “Robust Multimodal Cough Detection with Optimized Out-of-Distribution Detection for Wearables,” is published in the IEEE Journal of Biomedical and Health Informatics. The paper was co-authored by Feiya Xiang, a Ph.D. student at NC State; Alper Bozkurt, the McPherson Family Distinguished Professor in Engineering Entrepreneurship at NC State; Michelle Hernandez, professor of pediatric allergy-immunology in the University of North Carolina’s School of Medicine; and Delesha Carpenter, a professor in UNC’s Eshelman School of Pharmacy.

This work was done with support from the National Science Foundation (NSF) under grants 1915599, 1915169, 2037328 and 2344423. The work was also supported by NC State’s Center for Advanced Self-Powered Systems of Integrated Sensors and Technologies (ASSIST), which was created with support from NSF under grant 1160483.

-shipman-

Note to Editors: The study abstract follows.

“Robust Multimodal Cough Detection with Optimized Out-of-Distribution Detection for Wearables”

Authors: Yuhan Chen, Feiya Xiang, Alper Bozkurt and Edgar Lobaton, North Carolina State University; Michelle L. Hernandez and Delesha Carpenter, University of North Carolina at Chapel Hill

Published: Oct. 2, 2025, IEEE Journal of Biomedical and Health Informatics

DOI: 10.1109/JBHI.2025.3616945

Abstract: Longitudinal, continuous monitoring of cough is crucial for early and accurate diagnosis of respiratory diseases. While recent developments in wearables promise daily, at-home remote symptom monitoring that is more accurate than less frequent assessments in the clinic, important practical challenges remain, such as maintaining user speech privacy and coping with poor audio quality and background noise in uncontrolled real-world settings. This study addresses these challenges by developing and optimizing a compact multimodal cough detection system, enhanced with an Out-of-Distribution (OOD) detection algorithm. The cough-sensing modalities include audio and Inertial Measurement Unit (IMU) signals. We optimized this multimodal cough detection system by training with an enhanced dataset and employing a weighted multi-loss approach for the in-distribution (ID) classifier. For OOD detection, we improved the system by reconstructing the training data components. Our preliminary results indicate that the system is robust across window sizes from 1 to 5 seconds and performs efficiently at low audio frequencies, which can protect user privacy because speech becomes illegible or incomprehensible at lower sampling rates. Although we found that the multimodal model is sensitive to OOD data, the final optimized robust multimodal cough detection system outperforms the single-modal model integrated with OOD detection. Specifically, the optimized system maintains 90.08% accuracy and a cough F1 score of 0.7548 at a 16 kHz audio frequency, and 87.3% accuracy and a cough F1 score of 0.7015 at 750 Hz, even with half of the data being OOD during inference. The misclassified components mainly originate from nonverbal sounds, including sneezes and groans. These issues could be further mitigated by acquiring more data on cough, speech, and other nonverbal vocalizations. In general, we observed that the Audio-IMU multimodal model incorporating OOD detection techniques significantly improved cough detection performance and could provide a tangible solution to real-world acoustic cough detection problems.
