DEFENSE Committee: CLEVERSON MARQUES VIEIRA

A MASTER'S DEFENSE committee has been registered by the program.
STUDENT: CLEVERSON MARQUES VIEIRA
DATE: 28/02/2024
TIME: 09:00
LOCATION: DCOMP
TITLE:

Explainable Artificial Intelligence (XAI) applied to the classification of retinography images to support the diagnosis of Glaucoma.


KEYWORDS:

Artificial Intelligence, Machine Learning, Explainable Artificial Intelligence, Glaucoma.


PAGES: 122
MAJOR AREA: Exact and Earth Sciences
AREA: Computer Science
SUBAREA: Computer Systems
SPECIALTY: Basic Software
SUMMARY:

Machine learning models are used extensively across many areas of knowledge and have countless applications in almost every segment of human activity. In healthcare, artificial intelligence techniques have revolutionized disease diagnosis, with excellent performance in image classification. Although these models have achieved extraordinary results, the lack of explainability of their decisions has been a significant limitation to the widespread adoption of these techniques in clinical practice. Glaucoma is a neurodegenerative eye disease that can lead to irreversible blindness, and its early detection is crucial to preventing vision loss. Automated detection of glaucoma has been the subject of intense research in computer vision, with several studies proposing convolutional neural networks (CNNs) to analyze retinal fundus images and diagnose the disease automatically. However, these proposals lack explainability, which is crucial for ophthalmologists to understand the decisions made by the models and to justify them to their patients. This work aims to explore and apply explainable artificial intelligence (XAI) techniques to different convolutional neural network (CNN) architectures for glaucoma classification and to perform a comparative analysis of which explanation methods provide the best features for human interpretation in support of clinical diagnosis. An approach to visual interpretation called SCIM (SHAP-CAM interpretable mapping) is proposed, with promising results. Preliminary experiments indicate that, from a non-clinical standpoint, interpretability techniques based on gradient-weighted class activation mapping (Grad-CAM) and the proposed approach (SCIM), applied to the VGG19 architecture, provide the best features for human interpretability.
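For readers unfamiliar with the Grad-CAM technique cited above, the sketch below illustrates the general idea in Python with PyTorch; it is a minimal illustration only, not the author's SCIM method or experimental code, and it assumes a torchvision VGG19 model and a placeholder input tensor standing in for a preprocessed fundus image.

# Minimal Grad-CAM sketch (illustrative only; not the thesis code).
# Assumptions: torchvision's VGG19, last conv layer at features[34],
# and a random tensor as a stand-in for a preprocessed fundus image.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

model = vgg19(weights="IMAGENET1K_V1")
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer = model.features[34]          # final conv layer of VGG19
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)          # placeholder fundus image

logits = model(img)
class_idx = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, class_idx].backward()

# Grad-CAM: weight each feature map by its average gradient, sum, ReLU,
# upsample to input size, and normalize to [0, 1] for visualization.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

The resulting heat map highlights the image regions that most influenced the predicted class; in the thesis, such maps are compared against SHAP-based explanations and the proposed SCIM combination.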


COMMITTEE MEMBERS:
President - 2325597 - DIEGO ROBERTO COLOMBO DIAS
Internal - 1674068 - LEONARDO CHAVES DUTRA DA ROCHA
Internal - 1985872 - EDIMILSON BATISTA DOS SANTOS
External to the Institution - RODRIGO BONACIN
Notice registered on: 07/02/2024 15:01