Feature Extraction in Eye Images Using Convolutional Neural Network to Determine Cataract Disease

Authors

  • Fitra Rizki Ramdhani Universitas Bumigora, Mataram, Indonesia
  • Khasnur Hidjah Universitas Bumigora, Mataram, Indonesia
  • Muhammad Zulfikri Universitas Bumigora, Mataram, Indonesia
  • Hairani Hairani Universitas Bumigora, Mataram, Indonesia
  • Mayadi Mayadi Universiti Teknologi MARA, Sarawak, Malaysia
  • Ni Gusti Ayu Dasriani Universitas Bumigora, Mataram, Indonesia
  • Juvinal Ximenes Guterres Universidade Nacional Timor Lorosa'e, Dili, Timor-Leste

DOI:

https://doi.org/10.30812/ijecsa.v4i2.5064

Keywords:

Cataract, Convolutional Neural Networks, Model Testing, Optimizer

Abstract

The eye is one of the vital human senses and the main organ for vision. One visual impairment that requires special attention is blindness, and cataracts are a major cause of it. A cataract is a condition in which the eye's lens becomes cloudy due to changes in the lens fibers or materials inside the capsule. This cloudiness blocks light from entering the eye and reaching the retina, significantly interfering with vision. Early detection of cataracts is therefore essential to prevent blindness, and it requires an efficient image-based classification model. This study evaluates a Convolutional Neural Network (CNN) model for early cataract detection by comparing several optimization algorithms: Adaptive Moment Estimation (Adam), Root Mean Square Propagation (RMSprop), Adaptive Gradient Algorithm (AdaGrad), and Stochastic Gradient Descent (SGD). The research follows an experimental approach, in which eye image datasets are trained on the same CNN architecture under different parameter configurations. The results show that the Adam optimizer, with a data split of 70% training, 15% validation, and 15% testing over 50 epochs, produced the best results, achieving training, validation, and testing accuracies of 94%, 93%, and 93%, respectively. The other optimizers performed reasonably well but could not match Adam's stability and accuracy. The implication of this research is that the choice of optimizer and hyperparameter configuration plays a crucial role in improving the performance of image-based cataract detection models.
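The abstract compares four optimizers without reproducing their update rules. As a rough illustration of how they differ, the rules can be sketched in NumPy on a toy quadratic loss in place of the CNN; all hyperparameter values below (learning rate, decay factors, step count) are illustrative assumptions, not the study's settings:

```python
import numpy as np

def optimize(rule, steps=200, lr=0.1, rho=0.9, beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize the toy loss ||w - target||^2 with one of the four update rules."""
    target = np.array([3.0, -2.0])   # minimum of the toy loss
    w = np.zeros(2)                  # parameters being trained
    m = np.zeros(2)                  # first-moment estimate (Adam)
    v = np.zeros(2)                  # accumulated / averaged squared gradients
    for t in range(1, steps + 1):
        g = 2 * (w - target)         # gradient of ||w - target||^2
        if rule == "sgd":
            w -= lr * g
        elif rule == "adagrad":
            v += g ** 2              # sum of all past squared gradients
            w -= lr * g / (np.sqrt(v) + eps)
        elif rule == "rmsprop":
            v = rho * v + (1 - rho) * g ** 2   # exponential moving average
            w -= lr * g / (np.sqrt(v) + eps)
        elif rule == "adam":
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g ** 2
            m_hat = m / (1 - beta1 ** t)       # bias-corrected moments
            v_hat = v / (1 - beta2 ** t)
            w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return np.sum((w - target) ** 2)  # final loss

for rule in ("sgd", "adagrad", "rmsprop", "adam"):
    print(rule, optimize(rule))
```

The sketch shows the structural difference the paper's comparison rests on: AdaGrad accumulates all past squared gradients (so its effective step shrinks monotonically), RMSprop replaces the sum with a decaying average, and Adam additionally keeps a bias-corrected momentum term, which is commonly credited with the stability the abstract reports.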


References

[1] S. Agustin, E. N. Putri, and I. N. Ichsan, “Design of A Cataract Detection System based on The Convolutional Neural Network,” J. ELTIKOM J. Tek. Elektro, Teknol. Inf. dan Komput., vol. 8, no. 1, pp. 1–8, Jun. 2024, doi: 10.31961/ELTIKOM.V8I1.1019.

[2] V.-V. Nguyen and C.-L. Lin, “Enhancing Cataract Detection through Hybrid CNN Approach and Image Quadration: A Solution for Precise Diagnosis and Improved Patient Care,” Electronics, vol. 13, no. 12, p. 2344, Jun. 2024, doi: 10.3390/electronics13122344.

[3] A. B. Triyadi, A. Bustamam, and P. Anki, “Deep Learning in Image Classification using VGG-19 and Residual Networks for Cataract Detection,” in 2022 2nd International Conference on Information Technology and Education (ICIT&E), Jan. 2022, pp. 293–297. doi: 10.1109/ICITE54466.2022.9759886.

[4] C.-J. Lai, P.-F. Pai, M. Marvin, H.-H. Hung, S.-H. Wang, and D.-N. Chen, “The Use of Convolutional Neural Networks and Digital Camera Images in Cataract Detection,” Electronics, vol. 11, no. 6, p. 887, Mar. 2022, doi: 10.3390/electronics11060887.

[5] T. Ganokratanaa, M. Ketcham, and P. Pramkeaw, “Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models,” J. Imaging, vol. 9, no. 10, p. 197, Sep. 2023, doi: 10.3390/JIMAGING9100197.

[6] I. Santoso, A. M. Manurung, and E. R. Subhiyakto, “Comparison of ResNet-50, EfficientNet-B1, and VGG-16 Algorithms for Cataract Eye Image Classification,” J. Appl. Informatics Comput., vol. 9, no. 2, pp. 284–294, Mar. 2025, doi: 10.30871/JAIC.V9I2.8968.

[7] T. D. Fikri, R. Sigit, D. M. Sari, and D. P. Trianggadewi, “Early Detection and Classification of Cataracts Using Smartphone Imagery Based on Support Vector Machine (SVM) and Certainly Factor Methods,” in 2024 International Electronics Symposium (IES), Aug. 2024, pp. 669–674. doi: 10.1109/IES63037.2024.10665769.

[8] M. Ali, S. Yudono, F. Fazlur Ridha, D. Mardiyana, F. Al-Ghozi, and A. Maulana, “Klasifikasi Katarak Berdasarkan Optic Disc Citra Fundus Smartphone: Perbandingan Ekstraksi Ciri Tekstur Dan Metode Neural Network,” J. Teknol. Inf. dan Ilmu Komput., vol. 12, no. 1, pp. 203–212, Feb. 2025, doi: 10.25126/JTIIK.20251219254.

[9] M. A. Alghozali, D. P. Pamungkas, and U. Mahdiyah, “Implementasi Metode Transformasi Wavelet Diskrit Dengan K-Nearest Neighbor Untuk Klasifikasi Penyakit Mata,” Nusant. Eng., vol. 7, no. 2, pp. 103–109, Oct. 2024, doi: 10.29407/NOE.V7I02.22889.

[10] R. Denandra, A. Fariza, and Y. R. Prayogi, “Eye Disease Classification Based on Fundus Images Using Convolutional Neural Network,” in 2023 International Electronics Symposium (IES), Aug. 2023, pp. 563–568. doi: 10.1109/IES59143.2023.10242558.

[11] V. Wulandari and A. T. Putra, “Optimization of the Convolutional Neural Network Method Using Fine-Tuning for Image Classification of Eye Disease,” Recursive J. Informatics, vol. 2, no. 1, pp. 54–61, Mar. 2024, doi: 10.15294/rji.v2i1.73625.

[12] M. N. I. Muhlashin and A. Stefanie, “Klasifikasi Penyakit Mata Berdasarkan Citra Fundus Menggunakan YOLO V8,” JATI (Jurnal Mhs. Tek. Inform.), vol. 7, no. 2, pp. 1363–1368, Sep. 2023, doi: 10.36040/jati.v7i2.6927.

[13] M. Maspaeni, B. Imran, A. Hidayat, and S. Erniwati, “Implementasi Machine Learning untuk Mendeteksi Penyakit Katarak menggunakan Kombinasi Ekstraksi Fitur dan Neural Network Berdasarkan Citra,” JTIM J. Teknol. Inf. dan Multimed., vol. 7, no. 2, pp. 232–251, Mar. 2025, doi: 10.35746/jtim.v7i2.621.

[14] M. D. Pratama, R. Gustriansyah, and E. Purnamasari, “Klasifikasi Penyakit Daun Pisang menggunakan Convolutional Neural Network (CNN),” J. Teknol. Terpadu, vol. 10, no. 1, pp. 1–6, Jul. 2024, doi: 10.54914/jtt.v10i1.1167.

[15] L. G. R. Putra, K. Marzuki, and H. Hairani, “Correlation-based feature selection and Smote-Tomek Link to improve the performance of machine learning methods on cancer disease prediction,” Eng. Appl. Sci. Res., vol. 50, no. 6, pp. 577–583, 2023, doi: 10.14456/easr.2023.59.

[16] C. M. Lauw, H. Hairani, I. Saifuddin, J. X. Guterres, M. M. Huda, and M. Mayadi, “Combination of Smote and Random Forest Methods for Lung Cancer Classification,” Int. J. Eng. Comput. Sci. Appl., vol. 2, no. 2, pp. 59–64, Jan. 2023, doi: 10.30812/ijecsa.v2i2.3333.

[17] H. Hairani, M. Janhasmadja, A. Tholib, J. X. Guterres, and Y. Ariyanto, “Thesis Topic Modeling Study: Latent Dirichlet Allocation (LDA) and Machine Learning Approach,” Int. J. Eng. Comput. Sci. Appl., vol. 3, no. 2, pp. 51–60, 2024, doi: 10.30812/ijecsa.v3i2.4375.

[18] A. I. Pradana and W. Wijiyanto, “Identifikasi Jenis Kelamin Otomatis Berdasarkan Mata Manusia Menggunakan Convolutional Neural Network (CNN) dan Haar Cascade Classifier,” G-Tech J. Teknol. Terap., vol. 8, no. 1, pp. 502–511, Jan. 2024, doi: 10.33379/gtech.v8i1.3814.

[19] L. Syafa’ah, R. Prasetyono, and H. Hariyady, “Enhancing Qur’anic Recitation Experience with CNN and MFCC Features for Emotion Identification,” Kinet. Game Technol. Inf. Syst. Comput. Network, Comput. Electron. Control, vol. 9, no. 2, pp. 181–192, May 2024, doi: 10.22219/kinetik.v9i2.2007.

[20] E. C. Seyrek and M. Uysal, “A comparative analysis of various activation functions and optimizers in a convolutional neural network for hyperspectral image classification,” Multimed. Tools Appl., vol. 83, no. 18, pp. 53785–53816, Nov. 2023, doi: 10.1007/s11042-023-17546-5.

[21] H. Hairani and T. Widiyaningtyas, “Augmented Rice Plant Disease Detection with Convolutional Neural Networks,” INTENSIF J. Ilm. Penelit. dan Penerapan Teknol. Sist. Inf., vol. 8, no. 1, pp. 27–39, Feb. 2024, doi: 10.29407/intensif.v8i1.21168.

Published

2025-07-02

How to Cite

[1]
F. R. Ramdhani et al., “Feature Extraction in Eye Images Using Convolutional Neural Network to Determine Cataract Disease”, IJECSA, vol. 4, no. 2, pp. 81–90, Jul. 2025, doi: 10.30812/ijecsa.v4i2.5064.