Automated facial paralysis detection using VGG architectures

Authors

  • Abbas Khalifa Nawar Department of Computer Techniques Engineering, Imam Al-Kadhum College (IKC), Baghdad, Iraq
  • Hadi Raheem Ali Department of Computer Techniques Engineering, Imam Al-Kadhum College (IKC), Baghdad, Iraq
  • Mothefer Majeed Jahefer Department of Computer Techniques Engineering, Imam Al-Kadhum College (IKC), Baghdad, Iraq
  • Sabah Abdulazeez Jebur Department of Computer Techniques Engineering, Imam Al-Kadhum College (IKC), Baghdad, Iraq

DOI:

https://doi.org/10.47957/ijciar.v7i1.158

Keywords:

Facial paralysis, Deep learning, YFP, VGG, CNN

Abstract

Facial Paralysis (FP) is a debilitating condition that affects individuals worldwide by impairing their ability to control facial muscles, resulting in significant physical and emotional challenges. Precise and prompt identification of FP is crucial for appropriate medical intervention and treatment. With the advancements in deep learning techniques, specifically Convolutional Neural Networks (CNNs), there has been growing interest in utilising these models for automated FP detection. This paper investigates the effectiveness of CNN architectures in identifying patients with facial paralysis. The proposed method leverages the depth and simplicity of Visual Geometry Group (VGG) architectures to capture the intricate relationships within facial images and accurately classify individuals with FP on the YouTube Facial Palsy (YFP) dataset. The dataset consists of 2000 images categorised into individuals with FP and non-injured individuals. Data augmentation techniques were used to improve the robustness and generalisation of the proposed approach. The proposed model consists of a feature extraction module utilising the VGG network and a classification module with a Softmax classifier. The performance evaluation metrics include accuracy, recall, precision, and F1-score. Experimental results demonstrate that the VGG16 model achieved an accuracy of 88.47%, with a recall of 83.55%, a precision of 92.15%, and an F1-score of 87.64%. The VGG19 model attained an accuracy of 81.95%, with a recall of 72.44%, a precision of 88.58%, and an F1-score of 79.70%. The VGG16 model thus outperformed the VGG19 model in terms of accuracy, recall, precision, and F1-score. These results indicate that VGG architectures are effective in identifying patients with facial paralysis.
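The abstract describes a two-module pipeline: a pretrained VGG backbone used for feature extraction, followed by a Softmax classifier, trained on augmented YFP images and evaluated with accuracy, recall, precision, and F1-score. The sketch below shows one way such a pipeline could be assembled in Keras; it is a minimal illustration, not the authors' implementation. The 224x224 input size, the particular augmentation transforms, the frozen ImageNet-pretrained backbone, the Adam optimiser, and the evaluate helper are assumptions, and loading of the YFP images is omitted.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

NUM_CLASSES = 2            # facial paralysis vs. non-injured
IMG_SHAPE = (224, 224, 3)  # standard VGG input size (assumed)

# Data augmentation, as mentioned in the abstract (exact transforms assumed).
augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Feature-extraction module: ImageNet-pretrained VGG16 without its top layers.
backbone = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
backbone.trainable = False  # use the convolutional base purely as a feature extractor

# Classification module: pooled VGG features fed to a Softmax classifier.
inputs = layers.Input(shape=IMG_SHAPE)
x = augmentation(inputs)                                  # active only during training
x = tf.keras.applications.vgg16.preprocess_input(x)
x = backbone(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def evaluate(model, x_test, y_test):
    """Report the metrics used in the paper (x_test: face images, y_test: 0/1 labels)."""
    y_pred = np.argmax(model.predict(x_test), axis=1)
    return {
        "accuracy":  accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),
        "recall":    recall_score(y_test, y_pred),
        "f1":        f1_score(y_test, y_pred),
    }
```

Swapping VGG16 for VGG19 in this sketch only changes the imported backbone class; the classification module and the evaluation metrics stay the same.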

References

G.-S. J. Hsu, J.-H. Kang, and W.-F. Huang, “Deep hierarchical network with line segment learning for quantitative analysis of facial palsy,” IEEE Access, vol. 7, pp. 4833–4842, 2018. DOI: https://doi.org/10.1109/ACCESS.2018.2884969

Y. Xia, C. Nduka, R. Y. Kannan, E. Pescarini, J. E. Berner, and H. Yu, “AFLFP: A Database With Annotated Facial Landmarks for Facial Palsy,” IEEE Trans. Comput. Soc. Syst., 2022. DOI: https://doi.org/10.1109/TCSS.2022.3187622

G. S. Parra-Dominguez, C. H. Garcia-Capulin, and R. E. Sanchez-Yanez, “Automatic Facial Palsy Diagnosis as a Classification Problem Using Regional Information Extracted from a Photograph,” Diagnostics, vol. 12, no. 7, p. 1528, 2022. DOI: https://doi.org/10.3390/diagnostics12071528

T. Wang, S. Zhang, L. Liu, G. Wu, and J. Dong, “Automatic facial paralysis evaluation augmented by a cascaded encoder network structure,” IEEE Access, vol. 7, pp. 135621–135631, 2019. DOI: https://doi.org/10.1109/ACCESS.2019.2942143

H. Kim, J. Park, H. Kim, E. Hwang, and S. Rho, “Robust facial landmark extraction scheme using multiple convolutional neural networks,” Multimed. Tools Appl., vol. 78, pp. 3221–3238, 2019. DOI: https://doi.org/10.1007/s11042-018-6482-7

X. Liu, Y. Xia, H. Yu, J. Dong, M. Jian, and T. D. Pham, “Region based parallel hierarchy convolutional neural network for automatic facial nerve paralysis evaluation,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 28, no. 10, pp. 2325–2332, 2020. DOI: https://doi.org/10.1109/TNSRE.2020.3021410

O. O. Abayomi-Alli, R. Damaševičius, R. Maskeliūnas, and S. Misra, “Few-shot learning with a novel voronoi tessellation-based image augmentation method for facial palsy detection,” Electronics, vol. 10, no. 8, p. 978, 2021. DOI: https://doi.org/10.3390/electronics10080978

Z. Guo, W. Li, J. Dai, J. Xiang, and G. Dan, “Facial imaging and landmark detection technique for objective assessment of unilateral peripheral facial paralysis,” Enterp. Inf. Syst., vol. 16, no. 10–11, pp. 1556–1572, 2022. DOI: https://doi.org/10.1080/17517575.2021.1872108

S. A. Jebur, M. A. Mohammed, and A. K. Abdulhassan, “Covid-19 detection using medical images,” in AIP Conference Proceedings, 2023, vol. 2591, no. 1, p. 30030. DOI: https://doi.org/10.1063/5.0119758

L. R. Ali, S. A. Jebur, M. M. Jahefer, and B. N. Shaker, “Employing Transfer Learning for Diagnosing COVID-19 Disease,” Int. J. Online Biomed. Eng., vol. 18, no. 15, 2022. DOI: https://doi.org/10.3991/ijoe.v18i15.35761

S. A. Jebur, K. A. Hussein, H. K. Hoomod, L. Alzubaidi, and J. Santamaría, “Review on Deep Learning Approaches for Anomaly Event Detection in Video Surveillance,” Electronics, vol. 12, no. 1, p. 29, 2022. DOI: https://doi.org/10.3390/electronics12010029

S. A. Jebur, K. A. Hussein, and H. K. Hoomod, “Abnormal Behavior Detection in Video Surveillance Using Inception-v3 Transfer Learning Approaches,” IRAQI J. Comput. Commun. Control Syst. Eng., vol. 23, no. 2, pp. 210–221, 2023. DOI: https://doi.org/10.33103/uot.ijccce.23.2.16

J. Barbosa et al., “Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier,” BMC Med. Imaging, vol. 16, no. 1, pp. 1–18, 2016. DOI: https://doi.org/10.1186/s12880-016-0117-0

Z. Guo et al., “Deep assessment process: Objective assessment process for unilateral peripheral facial paralysis via deep convolutional neural network,” in 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017), 2017, pp. 135–138. DOI: https://doi.org/10.1109/ISBI.2017.7950486

S. Anping, X. Guoliang, D. Xuehai, S. Jiaxin, X. Gang, and Z. Wu, “Assessment for facial nerve paralysis based on facial asymmetry,” Australas. Phys. Eng. Sci. Med., vol. 40, pp. 851–860, 2017. DOI: https://doi.org/10.1007/s13246-017-0597-4

A. Song, Z. Wu, X. Ding, Q. Hu, and X. Di, “Neurologist standard classification of facial nerve paralysis with deep neural networks,” Futur. Internet, vol. 10, no. 11, p. 111, 2018. DOI: https://doi.org/10.3390/fi10110111

G.-S. J. Hsu, W.-F. Huang, and J.-H. Kang, “Hierarchical Network for Facial Palsy Detection,” in CVPR Workshops, 2018, pp. 580–586.

M. Sajid, T. Shafique, M. J. A. Baig, I. Riaz, S. Amin, and S. Manzoor, “Automatic grading of palsy using asymmetrical facial features: a study complemented by new solutions,” Symmetry (Basel), vol. 10, no. 7, p. 242, 2018. DOI: https://doi.org/10.3390/sym10070242

J. Barbosa, W.-K. Seo, and J. Kang, “paraFaceTest: an ensemble of regression tree-based facial features extraction for efficient facial paralysis classification,” BMC Med. Imaging, vol. 19, no. 1, pp. 1–14, 2019. DOI: https://doi.org/10.1186/s12880-019-0330-8

S. A. Jebur, K. A. Hussein, H. K. Hoomod, and L. Alzubaidi, “Novel Deep Feature Fusion Framework for Multi-Scenario Violence Detection,” Computers, vol. 12, no. 9, p. 175, 2023. DOI: https://doi.org/10.3390/computers12090175

L. Alzubaidi et al., “Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions,” J. big Data, vol. 8, no. 1, pp. 1–74, 2021. DOI: https://doi.org/10.1186/s40537-021-00444-8

L. R. Al-Khazraji, A. R. Abbas, and A. S. Jamil, “The Effect of Changing Targeted Layers of the Deep Dream Technique Using VGG-16 Model,” Int. J. Online Biomed. Eng., vol. 19, no. 3, 2023. DOI: https://doi.org/10.3991/ijoe.v19i03.37235

E. M. Imah and A. Wintarti, “Violence Classification Using Support Vector Machine and Deep Transfer Learning Feature Extraction,” in 2021 International Seminar on Intelligent Technology and Its Applications (ISITIA), 2021, pp. 337–342.

J. Xia, Y. Ding, and L. Tan, “Urban remote sensing scene recognition based on lightweight convolution neural network,” IEEE Access, vol. 9, pp. 26377–26387, 2021. DOI: https://doi.org/10.1109/ACCESS.2021.3057868

A. Kareem, H. Liu, and V. Velisavljevic, “A federated learning framework for pneumonia image detection using distributed data,” Healthc. Anal., p. 100204, 2023. DOI: https://doi.org/10.1016/j.health.2023.100204

A. Arora, A. Sinha, K. Bhansali, R. Goel, I. Sharma, and A. Jayal, “SVM and Logistic Regression for Facial Palsy Detection Utilizing Facial Landmark Features,” in Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing, 2022, pp. 43–48. DOI: https://doi.org/10.1145/3549206.3549216

C. A. Sari, F. A. Bachtiar, L. Muflikhah, and A. Widayati, “Facial Palsy Detection Through Changes in Facial Muscle Functionality Using CNN Algorithm,” in 2023 6th International Conference of Computer and Informatics Engineering (IC2IE), 2023, pp. 297–302. DOI: https://doi.org/10.1109/IC2IE60547.2023.10331016

Published

09-02-2024

How to Cite

Khalifa, A. N., Ali, H. R., Abdulazeez Jebur, S., & Jahefer, M. M. (2024). Automated facial paralysis detection using VGG architectures. International Journal of Current Innovations in Advanced Research, 7(1), 1–8. https://doi.org/10.47957/ijciar.v7i1.158

Issue

Section

Original Articles