Diagnosis of COVID-19 via acoustic analysis and artificial intelligence by monitoring breath sounds on smartphones

J Biomed Inform. 2022 Apr 27;130:104078. doi: 10.1016/j.jbi.2022.104078. Online ahead of print.

ABSTRACT

Scientific evidence suggests that acoustic analysis can serve as an indicator for diagnosing COVID-19. By analyzing breath sounds recorded on smartphones, it was found that patients with COVID-19 exhibit distinct patterns in both the time domain and the frequency domain; these patterns are used in this article to diagnose COVID-19 infection. Sound signal statistics, frequency-domain features, and Mel-Frequency Cepstral Coefficients (MFCCs) are computed and fed into two classifiers, k-Nearest Neighbors (kNN) and a Convolutional Neural Network (CNN), to determine whether a user has contracted COVID-19. Test results show that, surprisingly, more than 97% accuracy is achieved with the CNN classifier and more than 85% with kNN using optimized features. Methods for selecting the best features and various metrics for evaluating performance are also presented in this article. Owing to its high accuracy, the CNN model was implemented in an Android application that reports a COVID-19 diagnosis together with a probability indicating the level of confidence. An initial medical test shows results similar to those of the lateral flow method, indicating that the proposed approach is feasible and effective. Because the method relies only on breathing sounds recorded on a smartphone, it can be used by anyone regardless of the availability of other medical resources, making it a potentially powerful tool for society to diagnose COVID-19.

PMID:35489595 | PMC: PMC9044719 | DOI:10.1016/j.jbi.2022.104078
