Cloud-Based RetinaNet Framework for Accurate Detection of Glaucoma in Retinal Images
Keywords:
Glaucoma detection, RetinaNet, Cloud-based framework, Retinal images, Diagnostic tools
Abstract
The Cloud-Based RetinaNet Framework provides an accurate, scalable approach to glaucoma detection in retinal images. Glaucoma, characterized by progressive optic nerve degeneration, can cause permanent blindness if not detected early. The system leverages RetinaNet's focal loss, designed specifically to address class imbalance in medical datasets, enabling effective detection of glaucoma indicators. Cloud-based infrastructure supports high-performance processing of large retinal imaging datasets, allowing rapid analysis and remote access to results. The main aim is a reliable, automated method capable of identifying early-stage glaucoma from key signs such as optic disc cupping and retinal nerve fiber layer thinning. The objective is to improve diagnostic efficiency, enable timely intervention, and assist ophthalmologists in making accurate assessments, thereby reducing the risk of vision loss and improving patient outcomes. The proposed methodology holds significant potential for glaucoma detection in clinical practice. On the RIM-ONE_DL dataset, the proposed RetinaNet system achieves an overall accuracy of 97.2%, a sensitivity of 97.6%, and a specificity of 96.8%.
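The focal loss at the core of RetinaNet down-weights the contribution of easy, well-classified examples so that the abundant background class does not dominate training, which is why it suits imbalanced medical datasets. A minimal binary-case sketch is shown below; the `alpha` and `gamma` values are the standard RetinaNet defaults, and this is an illustrative formulation, not the paper's exact implementation.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive (glaucoma) class.
    y: ground-truth label, 0 or 1.
    alpha, gamma: standard RetinaNet defaults (illustrative sketch only).
    """
    p = np.clip(p, eps, 1 - eps)                  # numerical stability
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    # (1 - p_t)**gamma shrinks the loss on confident, easy examples
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

With `gamma = 0` the expression reduces to alpha-weighted cross-entropy; raising `gamma` progressively suppresses easy examples, so a confident correct prediction (`p = 0.9`, `y = 1`) contributes orders of magnitude less loss than a hard misclassified one (`p = 0.1`, `y = 1`).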

