EAR: Audio Based Mobile Health Diagnostics.

Funded by the European Research Council

This is a €2.5M European Research Council Advanced Grant awarded to Prof Cecilia Mascolo.

Mobile health is becoming the holy grail of affordable medical diagnostics. It has the potential to link human behaviour with medical symptoms automatically and at an early disease stage; it also offers cheap deployment, reaching populations that generally cannot afford diagnostics, and delivering monitoring so fine-grained that it is likely to improve diagnostic theory itself. Advances in technology offer new ranges of sensing and computation capability with the potential to extend the reach of mobile health even further. Audio sensing through the microphones of mobile devices has been recognized as a powerful and yet underutilized source of medical information: sounds from the human body (e.g., sighs, breathing sounds and voice) are indicators of disease or disease onset.

Current pilots, while generally medically grounded, are often ad hoc from the perspective of key areas of computer science: in their approaches to computational models, in how system resource demands are optimized to fit within the limits of mobile devices, and in the robustness needed for tracking people in their daily lives. Audio sensing also comes with challenges which threaten its use in a clinical context: its power-hungry nature and the sensitivity of the data it collects.

This project proposes a systematic framework to link sounds to disease diagnosis and to deal with the inherent issues raised by in-the-wild sensing: noise and privacy concerns. We exploit audio models in wearable systems, maximizing the use of local hardware resources while balancing power consumption and accuracy. Privacy emerges as a by-product, since on-device analysis removes the need for cloud analytics. The framework embeds the ability to quantify diagnostic uncertainty as a first-class citizen in the model, and accounts for patient context as a confounding factor through cross-sensor modality components which take advantage of additional sensor inputs indicative of user behaviour.
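To make these ideas concrete, the sketch below shows one plausible shape of the kind of pipeline described above: a compact log-mel feature extractor feeding a small classifier whose predictive uncertainty is estimated with Monte Carlo dropout. This is a minimal illustration only, not the project's implementation; the use of librosa and PyTorch, the layer sizes, and all names (SmallAudioNet, predict_with_uncertainty, cough_sample.wav) are our assumptions.

```python
# Minimal sketch (not the project's implementation): an on-device-style
# audio classifier with built-in uncertainty estimation via MC dropout.
# Assumptions: librosa for log-mel features, PyTorch for the model,
# a 16 kHz mono recording of a body sound (e.g., a cough or breath).

import numpy as np
import librosa
import torch
import torch.nn as nn

def log_mel_features(wav_path, sr=16000, n_mels=40):
    """Compute a log-mel spectrogram from a short recording."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.tensor(log_mel, dtype=torch.float32).unsqueeze(0)  # (1, n_mels, T)

class SmallAudioNet(nn.Module):
    """A deliberately small CNN, in the spirit of resource-constrained devices."""
    def __init__(self, n_classes=2, p_drop=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.dropout = nn.Dropout(p_drop)  # kept active at test time for MC dropout
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.features(x.unsqueeze(1)).flatten(1)  # (batch, 16)
        return self.classifier(self.dropout(x))

def predict_with_uncertainty(model, x, n_samples=20):
    """Return mean class probabilities and their spread across stochastic passes."""
    model.train()  # keep dropout on so each pass samples a different sub-network
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)

# Hypothetical usage:
# x = log_mel_features("cough_sample.wav")
# model = SmallAudioNet()
# mean_p, std_p = predict_with_uncertainty(model, x)
# # A large std flags a diagnosis the system should not be confident about.
```

Note that in such a design all computation stays local: features and inference run on the device itself, which is what allows privacy to emerge as a by-product of removing cloud analytics.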
As part of the project, we have launched the COVID-19 Sounds crowdsourced data collection: read more about it here.

People

Publications

Exploring Automatic COVID-19 Diagnosis via Voice and Symptoms from Crowdsourced Data.
Jing Han, Chloe Brown, Jagmohan Chauhan, Andreas Grammenos, Apinan Hasthanasombat, Dimitris Spathis, Tong Xia, Pietro Cicuta, Cecilia Mascolo.
In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021). June 2021. [PDF].

A First Step Towards On-Device Monitoring of Body Sounds in the Wild.
Shyam Tailor, Jagmohan Chauhan, Cecilia Mascolo.
In Proceedings of the International Workshop on Computing for Well-Being (WellComp 2020). September 2020. [PDF]. Best Paper Award.

Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data.
Chloe Brown, Jagmohan Chauhan, Andreas Grammenos, Jing Han, Apinan Hasthanasombat, Dimitris Spathis, Tong Xia, Pietro Cicuta, Cecilia Mascolo.
In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), Health Day: AI for COVID. August 2020. [PDF], [Video].