humaneBicycle/spirometer-android
Audio Spirometer on Android

Abstract

This research focused on developing an Android application for estimating two important lung-function parameters, FEV1 and FVC, using an Android smartphone. A dataset collected in a previous study by Rishiraj Adhikary was used for the analysis. Audio-based features were generated from the collected data through feature extraction.

Key Work Includes

  1. An Android application that uses a random forest classifier to predict the lung parameters FEV1 and FVC.
  2. The application also records additional features, such as the orientation of the device and information about the person (age, weight, height, and gender), and exports this information to a CSV file for further analysis.
  3. Visual feedback for each maneuver, making maneuvers easier to record.

Methodology

Feature Extraction:

Several audio features were initially considered, such as MFCCs (Mel-frequency cepstral coefficients), the spectrogram (STFT, or short-time Fourier transform), spectral features, and temporal features. According to the paper "High-Resolution Time-Frequency Spectrum-Based Lung Function Test from a Smartphone Microphone", an error of 12% was observed when STFT features were used. STFT features were also easier to extract and implement on Android, so we decided to continue with them. They are generated using the librosa library and flattened so that a random forest model can be trained.

Model Training

The STFT features form a 2D matrix of complex numbers. We flattened the matrix by taking the mean of each row, so that a random forest could be trained on a fixed-length vector. The Random Forest classifier from the scikit-learn library was used to train the model on the extracted features, and its performance was evaluated using leave-one-out cross-validation (LOOCV).
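A minimal sketch of the flatten-and-evaluate loop, using synthetic data in place of the study's recordings. Since FEV1 and FVC are continuous quantities, this sketch uses scikit-learn's `RandomForestRegressor`; treating the task as regression is an assumption on our part:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_samples, n_bins, n_frames = 20, 1025, 40

# Synthetic stand-ins for |STFT| matrices and FEV1 targets (litres).
stfts = rng.random((n_samples, n_bins, n_frames))
y = rng.uniform(2.0, 5.0, size=n_samples)

# Flatten each 2-D matrix by taking the mean of every row (frequency bin),
# yielding one fixed-length feature vector per maneuver.
X = stfts.mean(axis=2)

# LOOCV: train on all maneuvers but one, test on the held-out maneuver.
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])[0]
    errors.append(abs(pred - y[test_idx][0]) / y[test_idx][0])

mape = 100.0 * float(np.mean(errors))  # mean absolute percentage error
```

The same loop is run once per target (FEV1 and FVC), producing the per-parameter error figures reported below.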

Android Implementation

To run the model on Android devices, the Open Neural Network Exchange (ONNX) format was used. The model was first trained and then exported to ONNX format using the skl2onnx library. The ONNX model is then converted to the ORT format and executed with ONNX Runtime on the Android device. An article explaining this entire process was used as a reference.

The converted ORT model was integrated into the Android application, allowing real-time prediction from audio captured by the device's microphone. Features are extracted with jLibrosa, the Java equivalent of the librosa library, flattened, and fed to the model, which then predicts FEV1 and FVC.

It is important to record the audio with the same parameters used to collect the data in the study. This ensures that the audio seen at inference time matches the audio the model was trained on.



Results

Accuracy of the model on the dataset: the model performed fairly accurately, with mean absolute percentage errors of 6.05% for FEV1 and 5.77% for FVC.

This Repository

  1. android_app contains the Android app.
  2. deployed_model contains the versions of the deployed models.
  3. methodology contains the method, the model, and the data used to train the model.
