نشریه علمی مهندسی پزشکی زیستی

Diagnosing COPD through Lung Sound Analysis with a Focus on Temporal Features, Using a Network Based on Temporal Attention and Bidirectional Recurrent Gates

Document Type: Full Research Paper

Authors

1 M.Sc. Student, Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran

2 Professor, Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran

Abstract
Listening to the sounds of body organs is a long-established method of diagnosing disease, used by specialists to analyze abnormal sounds. Most deaths from respiratory disease occur in low-income countries that lack equipment and specialists, so diagnostic methods based on machine learning and audio processing, which are accessible, non-invasive, and inexpensive, can enable early diagnosis and save millions of lives. Previous studies have used inputs that reflect the frequency characteristics of the sound; in this article, we additionally use a recurrence representation that reflects its temporal characteristics and feed it to convolutional networks in order to benefit from transfer learning. By adding a temporal attention mechanism and bidirectional recurrent gates, the audio sequence is treated as a time series and each time step is weighted according to its importance. The data used in this article are from the ICBHI lung sound database, which has been used in many other studies. The proposed method classifies lung sounds into three categories: healthy, chronic obstructive pulmonary disease (COPD), and other diseases, with an accuracy of 97%, outperforming other methods evaluated on this database.
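The recurrence representation mentioned above presumably follows the recurrence-plot construction of Eckmann et al. [18]. As a rough sketch only (the embedding dimension, delay, and threshold below are hypothetical choices, not the paper's settings), a recurrence plot can be computed from a 1-D lung-sound frame with NumPy:

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=None):
    """Binary recurrence plot of a 1-D signal.

    Each sample is embedded as a `dim`-dimensional delay vector with
    delay `tau`; entry (i, j) is 1 when state vectors i and j are
    closer than the threshold `eps` (Eckmann et al., 1987).
    """
    n = len(x) - (dim - 1) * tau
    # Delay embedding: rows are the reconstructed state vectors.
    emb = np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)
    # Pairwise Euclidean distances between state vectors.
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * d.max()          # heuristic threshold
    return (d <= eps).astype(np.uint8)

# Toy example: a short sine segment yields a periodic texture.
t = np.linspace(0, 4 * np.pi, 200)
rp = recurrence_plot(np.sin(t))
print(rp.shape)  # (196, 196): one row per embedded sample
```

The resulting 2-D image can then be resized to the input resolution a pretrained convolutional network expects (e.g. 224×224 for a ResNet), which is how a recurrence image can be plugged into a transfer-learning pipeline.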
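The temporal weighting described above can likewise be illustrated with a minimal attention-pooling sketch. All shapes and parameters here are hypothetical: in the actual network the projection `w`, bias `b`, and context vector `u` would be learned, and `h` would be the bidirectional recurrent (BiGRU) outputs for a sequence of audio frames.

```python
import numpy as np

def temporal_attention(h, w, b, u):
    """Weight each time step of a hidden-state sequence by its relevance.

    h : (T, d) hidden states, one per audio frame
    w, b, u : projection (d, d), bias (d,), and context vector (d,)
    Returns the attention-pooled summary vector and per-step weights.
    """
    scores = np.tanh(h @ w + b) @ u     # (T,) relevance score per step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                # softmax over the time axis
    return alpha @ h, alpha             # weighted sum of hidden states

rng = np.random.default_rng(0)
T, d = 10, 8                            # toy sequence length and state size
h = rng.standard_normal((T, d))
ctx, alpha = temporal_attention(h, rng.standard_normal((d, d)),
                                np.zeros(d), rng.standard_normal(d))
print(ctx.shape, alpha.shape)
```

Because the weights are softmax-normalized over time, each frame contributes in proportion to its learned relevance, which is the "weighted according to its importance" behavior the abstract describes.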


  1. X. Xu et al., “A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia,” Engineering, vol. 6, no. 10, pp. 1122–1129, 2020, doi: 10.1016/j.eng.2020.04.010.
  2. Shi, K. Du, C. Zhang, H. Ma, and W. Yan, “Lung Sound Recognition Algorithm Based on VGGish-BiGRU,” IEEE Access, vol. 7, pp. 139438–139449, 2019.
  3. F. Demir, A. Sengur, and V. Bajaj, “Convolutional neural networks based efficient approach for classification of lung diseases,” Health Inf. Sci. Syst., vol. 8, no. 1, 2020.
  4. G. Altan, Y. Kutlu, and N. Allahverdi, “Deep learning on computerized analysis of chronic obstructive pulmonary disease,” IEEE J. Biomed. Health Inform., vol. 24, no. 5, pp. 1344–1350, 2020.
  5. A. Saraiva et al., “Classification of images of childhood pneumonia using convolutional neural networks,” BIOIMAGING 2019 - 6th Int. Conf. Bioimaging, Proceedings; Part 12th Int. Jt. Conf. Biomed. Eng. Syst. Technol. BIOSTEC 2019, no. Biostec, pp. 112–119, 2019.
  6. D. Perna and A. Tagarelli, “Deep auscultation: Predicting respiratory anomalies and diseases via recurrent neural networks,” Proc. IEEE Symp. Comput. Med. Syst., pp. 50–55, 2019.
  7. Liu, S. Cai, K. Zhang, and N. Hu, “Detection of adventitious respiratory sounds based on convolutional neural network,” in 2019 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS). IEEE, 2019, pp. 298–303.
  8. I. Khan and R. B. Pachori, “Automated classification of lung sound signals based on empirical mode decomposition,” Expert Syst. Appl., vol. 184, p. 115456, 2021.
  9. S. İçer and Ş. Gengeç, “Classification and analysis of non-stationary characteristics of crackle and rhonchus lung adventitious sounds,” Digit. Signal Process., vol. 28, no. 1, pp. 18–27, 2014.
  10. M. Bahoura, “Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes,” Comput. Biol. Med., vol. 39, no. 9, pp. 824–843, 2009.
  11. A. D. Orjuela-Cañón, F. D. Gómez-Cajas, and R. Jiménez-Moreno, “Artificial Neural Networks for Acoustic Lung,” in Prog. Pattern Recognition, Image Anal., Comput. Vision, Appl., Springer Int. Publ., pp. 214–221, 2014.
  12. Chen, X. Yuan, Z. Pei, M. Li, and J. Li, “Triple-Classification of Respiratory Sounds Using Optimized S-Transform and Deep Residual Networks,” IEEE Access, vol. 7, pp. 32845–32852, 2019.
  13. M. B. Shuvo, S. N. Ali, S. I. Swapnil, T. Hasan, and M. I. H. Bhuiyan, “A Lightweight CNN Model for Detecting Respiratory Diseases from Lung Auscultation Sounds using EMD-CWT-based Hybrid Scalogram,” arXiv preprint, 2020. [Online]. Available: http://arxiv.org/abs/2009.04402.
  14. R. X. A. Pramono, S. Bowyer, and E. Rodriguez-Villegas, “Automatic adventitious respiratory sound analysis: A systematic review,” PLoS One, vol. 12, no. 5, 2017.
  15. ICBHI dataset: https://bhichallenge.med.auth.gr/ICBHI_2017_Challenge
  16. A. K. Dwivedi, S. A. Imtiaz, and E. Rodriguez-Villegas, “Algorithms for Automatic Analysis and Classification of Heart Sounds — A Systematic Review,” IEEE Access, vol. 7, pp. 8316–8345, 2019.
  17. Salman, N. Ahmadi, R. Mengko, A. Z. R. Langi, and T. L. R. Mengko, “Performance comparison of denoising methods for heart sound signal,” in 2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 435–440, IEEE, Nusa Dua, Indonesia, 2015
  18. J.-P. Eckmann, S. Oliffson Kamphorst, and D. Ruelle, “Recurrence plots of dynamical systems,” Europhys. Lett., vol. 4, no. 9, pp. 973–977, 1987.
  19. N. Marwan, M. C. Romano, M. Thiel, and J. Kurths, “Recurrence plots for the analysis of complex systems,” Phys. Rep., vol. 438, no. 5–6, pp. 237–329, 2007.
  20. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
  21. Z. Zhang, S. Xu, S. Zhang, T. Qiao, and S. Cao, “Learning Attentive Representations for Environmental Sound Classification,” IEEE Access, vol. 7, pp. 130327–130339, 2019.
  22. M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, 1997.
  23. AudioSet dataset: https://research.google.com/audioset/dataset/index.html
Volume 16, Issue 4
Winter 2023
Pages 295-307

  • Received: 29 September 2022
  • Revised: 14 June 2023
  • Accepted: 06 August 2023