SABiT-MNet: Scale-Adaptive Autoencoder with BiT-M Model for Identifying AMD Grades

Niveen Nasr El-Den, M. Elsharkawy, M. Ghazal, Ali H. Mahmoud, Harpal S. Sandhu, Hani Mahdi, A. El-Baz

Published 2025 in IEEE International Conference on Acoustics, Speech, and Signal Processing

ABSTRACT

Accurate early diagnosis is crucial in addressing Age-related Macular Degeneration (AMD), a chronic retinal disease that is a leading cause of blindness among the elderly. Medical imaging, particularly fundus imaging, is essential for timely detection and intervention. To address the variability in image sizes within our dataset, this paper introduces the SABiT-MNet model, which effectively discriminates between healthy retinas, dry AMD, and wet AMD. The model integrates a novel scale-adaptive (SA) approach by combining an autoencoder with Big Transfer (BiT) as its backbone. Unlike traditional resizing methods, which often result in the loss of critical diagnostic information, the SA model dynamically adjusts to varying image sizes, preserving key retinal features essential for accurate diagnosis. The primary aim of this architecture is to retain crucial details in fundus images to ensure precise classification. In this study, 648 subjects were recruited through the Comparison of AMD Treatments Trials study group, sponsored by the University of Pennsylvania. Experimental results demonstrate that the proposed SABiT-MNet model outperforms state-of-the-art approaches, including transformer-based models, achieving superior diagnostic accuracy. The model recorded performance metrics of 94% accuracy, 97% sensitivity, and 93.94% specificity. To further validate the robustness of the system, we tested it on the public ODIR dataset, where it achieved similarly promising results, confirming the effectiveness of our approach.
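The paper does not include code, and its exact scale-adaptive mechanism is not specified in the abstract. As a minimal illustration of the underlying idea — producing a fixed-size representation from variable-size fundus images without interpolation-based resizing — the NumPy sketch below implements adaptive average pooling, one standard way to handle variable input sizes before a fixed-size backbone such as BiT. The function name, grid size, and pooling choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_avg_pool2d(img, out_h, out_w):
    """Average-pool a 2-D array down to a fixed (out_h, out_w) grid.

    Each output cell averages a rectangular region of the input, with
    region boundaries computed from the input size, so images of any
    height/width map to the same output shape without resampling.
    (Illustrative sketch; not the authors' SA module.)
    """
    h, w = img.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        # Row range covered by output row i: floor/ceil split of the height.
        r0 = (i * h) // out_h
        r1 = ((i + 1) * h + out_h - 1) // out_h
        for j in range(out_w):
            c0 = (j * w) // out_w
            c1 = ((j + 1) * w + out_w - 1) // out_w
            out[i, j] = img[r0:r1, c0:c1].mean()
    return out

# Two "fundus images" of different sizes both yield a 4x4 feature grid,
# so a downstream classifier sees a fixed-size input.
small = adaptive_avg_pool2d(np.random.rand(96, 128), 4, 4)
large = adaptive_avg_pool2d(np.random.rand(512, 768), 4, 4)
assert small.shape == large.shape == (4, 4)
```

In a full pipeline, a layer like this (or a learned scale-adaptive encoder, as the paper proposes) would sit between the raw image and the fixed-input backbone, avoiding the detail loss that naive bilinear resizing can introduce.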
