Dublin Core field | Value | Language |
dc.contributor.advisor | Silva, Nilton Correia da | - |
dc.contributor.author | Silva, Miguel Pimentel da | - |
dc.identifier.citation | SILVA, Miguel Pimentel da. Feature Selection using SHAP: an explainable AI approach. 2021. 65 f. Undergraduate thesis (Bachelor of Software Engineering), Universidade de Brasília, Brasília, 2021. | pt_BR |
dc.description | Undergraduate final-year thesis, Universidade de Brasília, Faculdade UnB Gama, 2021. | pt_BR |
dc.rights | Open Access | pt_BR |
dc.subject.keyword | Artificial intelligence | pt_BR |
dc.title | Feature Selection using SHAP: an explainable AI approach | pt_BR |
dc.type | Undergraduate final-year thesis (Bachelor's degree) | pt_BR |
dc.date.accessioned | 2021-11-11T01:14:34Z | - |
dc.date.available | 2021-11-11T01:14:34Z | - |
dc.date.submitted | 2021-05-23 | - |
dc.identifier.uri | https://bdm.unb.br/handle/10483/29178 | - |
dc.language.iso | English | pt_BR |
dc.rights.license | The license granted for this item refers to the printed authorization term signed by the author, which authorizes the Digital Library of Undergraduate Works of the Universidade de Brasília (BDM) to make the final-year thesis available through the site bdm.unb.br under the following conditions: available under a Creative Commons 4.0 International license, which permits copying, distributing, and transmitting the work, provided the author and licensor are credited. Commercial use and adaptation are not permitted. | pt_BR |
dc.description.abstract1 | In the last decade, Artificial Intelligence (AI) has appeared in many different areas of human life. Those AI models are often based on complex algorithms and neural networks, also called black boxes. In recent years, tools such as SHAP have emerged with the objective of explaining the operation of black boxes. Studies have shown that these tools can be used for feature selection, which can improve the accuracy of the models and reduce the computational cost of model training. The main objective of this work is to understand how much explainability tools can assist in the feature selection process from three perspectives: performance, training time, and accuracy. These metrics were evaluated in two practical experiments, the first using the Breast Cancer dataset and the second using the Credit Card Fraud dataset. Each experiment was carried out for the following models: Random Forest, XGBoost, CatBoost, and LightGBM. As a result, we conclude that SHAP, in addition to providing explainability, can bring performance gains in a machine learning model. | pt_BR |
Appears in collection: | Engenharia de Software |
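The abstract's idea, ranking features by their Shapley-value contribution and keeping only the top-ranked ones, can be sketched in miniature. The thesis works with the SHAP library's explainers over tree ensembles; the sketch below is only an illustration of the underlying concept, brute-forcing exact Shapley values of each feature's contribution to validation accuracy on the (scikit-learn) Breast Cancer dataset. The 5-feature cap, the 50-tree forest, and the top-3 cutoff are arbitrary assumptions chosen to keep the enumeration cheap, not parameters from the work.

```python
# Illustrative sketch: Shapley-value-based feature selection.
# NOT the thesis's SHAP-library pipeline; exact Shapley values are
# brute-forced over feature coalitions, feasible only for a few features.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X = X[:, :5]  # keep 5 features so all 2^5 coalitions stay cheap (assumption)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy(subset):
    """Validation accuracy of a model trained on the given feature subset."""
    if not subset:  # empty coalition: majority-class baseline
        return max(np.mean(y_te), 1 - np.mean(y_te))
    cols = list(subset)
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_tr[:, cols], y_tr)
    return model.score(X_te[:, cols], y_te)

# Cache the "value" of every coalition of features.
n = X.shape[1]
cache = {frozenset(s): accuracy(s)
         for r in range(n + 1) for s in combinations(range(n), r)}

def shapley(j):
    """Exact Shapley value of feature j: its weighted average marginal
    contribution to validation accuracy over all coalitions."""
    others = [k for k in range(n) if k != j]
    total = 0.0
    for r in range(n):
        w = factorial(r) * factorial(n - r - 1) / factorial(n)
        for s in combinations(others, r):
            total += w * (cache[frozenset(s) | {j}] - cache[frozenset(s)])
    return total

values = [shapley(j) for j in range(n)]
ranking = np.argsort(values)[::-1]  # most important features first
selected = ranking[:3]              # keep the top-3 features (assumption)
```

A useful sanity check on the sketch is the efficiency property of Shapley values: the per-feature values sum exactly to the gain of the full feature set over the empty baseline. In practice, the SHAP library's `TreeExplainer` computes per-prediction attributions in polynomial time for tree models, avoiding this exponential enumeration.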