COVID-19 Severity Classification Using Hybrid Feature Extraction: Integrating Persistent Homology, Convolutional Neural Networks and Vision Transformers

Redet Assefa, A. Mamuye, Marco Piangerelli

Published 2025 in Big Data and Cognitive Computing

ABSTRACT

This paper introduces a model that automates the classification of COVID-19 severity, reducing reliance on highly trained professionals, particularly in resource-constrained settings. To ensure data consistency, the dataset was preprocessed for uniformity in size, format, and color channels, and image quality was enhanced using histogram equalization to improve the dynamic range. Lung regions were isolated using segmentation techniques, which also removed extraneous areas from the images, and a modified segmentation-based cropping technique defined an optimal cropping rectangle. Feature extraction was performed using persistent homology, deep learning, and a hybrid of the two. Persistent homology captured topological features across multiple scales, while the deep learning model combined the translation equivariance of convolutions with the input-adaptive weighting and global receptive field of Vision Transformers. By integrating features from both methods, the classification model predicted severity levels (mild, moderate, severe). The segmentation-based cropping method yielded a modest improvement, reaching 80% accuracy, while standalone persistent homology features reached 66%. Notably, the hybrid model outperformed existing approaches, including SVM, ResNet50, and VGG16, achieving 82% accuracy.
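The record does not include code, but the preprocessing and topological steps described above can be made concrete. Below is a minimal sketch, assuming OpenCV for resizing and histogram equalization, a precomputed binary lung mask for the segmentation-based crop, and GUDHI cubical complexes for sublevel-set persistent homology; the function names and the summary-statistic vectorization are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np
import gudhi

def preprocess(path, size=(224, 224)):
    # Uniform size/format, then histogram equalization to widen dynamic range.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)
    return cv2.equalizeHist(img)

def crop_to_lungs(img, lung_mask):
    # Segmentation-based cropping: the tightest rectangle around the
    # (hypothetical, precomputed) binary lung mask, discarding extraneous areas.
    ys, xs = np.nonzero(lung_mask)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def ph_features(img):
    # Multi-scale topological features from sublevel-set persistence on a
    # cubical complex; bar counts and lifetimes stand in for a richer
    # vectorization such as persistence images or landscapes.
    cc = gudhi.CubicalComplex(top_dimensional_cells=img.astype(np.float64))
    cc.persistence()
    feats = []
    for dim in (0, 1):  # H0: connected components, H1: loops
        bars = cc.persistence_intervals_in_dimension(dim)
        life = bars[:, 1] - bars[:, 0] if len(bars) else np.array([])
        life = life[np.isfinite(life)]  # drop the essential (infinite) bar
        feats += [float(len(life)),
                  float(life.sum()),
                  float(life.mean()) if len(life) else 0.0]
    return np.array(feats)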
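A similarly hedged sketch of the hybrid step follows: a pretrained torchvision ViT-B/16 backbone stands in for the paper's CNN/ViT feature extractor, fusion is plain concatenation, and an SVM (one of the baselines cited above) serves as the three-class severity classifier. The backbone choice, fusion scheme, and hyperparameters are assumptions; the record does not specify them.

import numpy as np
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.svm import SVC

weights = ViT_B_16_Weights.DEFAULT
backbone = vit_b_16(weights=weights)
backbone.heads = torch.nn.Identity()  # keep the 768-d embedding, drop the ImageNet head
backbone.eval()
to_model_input = weights.transforms()  # resize/normalize to the ViT's expected format

@torch.no_grad()
def deep_features(rgb_image):
    # Global, input-adaptive embedding from the Vision Transformer backbone.
    # rgb_image: a PIL image in RGB (grayscale X-rays need channel replication).
    x = to_model_input(rgb_image).unsqueeze(0)  # (1, 3, 224, 224)
    return backbone(x).squeeze(0).numpy()       # (768,)

def hybrid_vector(rgb_image, ph_vec):
    # Fuse topological and deep features into a single descriptor.
    return np.concatenate([ph_vec, deep_features(rgb_image)])

# Severity classification (mild / moderate / severe) over fused descriptors:
# clf = SVC(kernel="rbf").fit(X_train, y_train)  # rows of X_train are hybrid_vector outputs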

PUBLICATION RECORD

  • Publication year

    2025

  • Venue

    Big Data and Cognitive Computing

  • Publication date

    2025-03-31

  • Fields of study

    Medicine, Computer Science

  • Source metadata

    Semantic Scholar
