Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/101258
Full metadata record
DC Field    Value    Language
dc.contributor.author    Pizarroso Gonzalo, Jaime    es-ES
dc.contributor.author    Alfaya Sánchez, David    es-ES
dc.contributor.author    Portela González, José    es-ES
dc.contributor.author    Muñoz San Roque, Antonio    es-ES
dc.date.accessioned    2025-07-16T12:21:20Z    -
dc.date.available    2025-07-16T12:21:20Z    -
dc.date.issued    2025-08-01    es_ES
dc.identifier.issn    1568-4946    es_ES
dc.identifier.uri    https://doi.org/10.1016/j.asoc.2025.113300    es_ES
dc.identifier.uri    http://hdl.handle.net/11531/101258    -
dc.description    Artículos en revistas    es_ES
dc.description.abstract    -    es-ES
dc.description.abstract    As Machine Learning models are considered for autonomous decisions with significant social impact, the need to understand how these models work grows rapidly. Explainable Artificial Intelligence (XAI) aims to provide interpretations for the predictions made by Machine Learning models, in order to make the model trustworthy and more transparent for the user. For example, selecting the input variables that are relevant to the problem directly impacts the model’s ability to learn and make accurate predictions. One of the main XAI techniques for obtaining input variable importance is sensitivity analysis based on partial derivatives. However, the existing literature on this method provides no justification for the aggregation metrics used to retrieve information from the partial derivatives. In this paper, a theoretical framework is proposed to study the sensitivities of ML models using metric techniques. From this metric interpretation, a complete family of new quantitative metrics called α-curves is extracted. These α-curves provide deeper information on the importance of the input variables of a machine learning model than existing XAI methods in the literature. We demonstrate the effectiveness of the α-curves on synthetic and real datasets, comparing the results against other XAI methods for variable importance and validating the analysis against the ground truth or information from the literature. (An illustrative sketch of the α-curve aggregation follows this metadata record.)    en-GB
dc.format.mimetype    application/octet-stream    es_ES
dc.language.iso    en-GB    es_ES
dc.source    Journal: Applied Soft Computing, Period: 1, Volume: online, Issue: , First page: 113300-1, Last page: 113300-19    es_ES
dc.subject.other    Instituto de Investigación Tecnológica (IIT)    es_ES
dc.title    Metric tools for sensitivity analysis with applications to neural networks    es_ES
dc.type    info:eu-repo/semantics/article    es_ES
dc.description.version    info:eu-repo/semantics/publishedVersion    es_ES
dc.rights.holder    -    es_ES
dc.rights.accessRights    info:eu-repo/semantics/openAccess    es_ES
dc.keywords    -    es-ES
dc.keywords    Sensitivity; Machine learning; Feature importance; Explainable AI; Regression; Feature engineering; Neural networks    en-GB
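
Note on the method described in the abstract: the α-curves aggregate the absolute partial derivatives of the model output with respect to each input through an α-dependent generalized mean. The snippet below is a minimal illustrative sketch of that aggregation under stated assumptions, not the authors' implementation: the names alpha_curve and f, the toy model, the central finite differences, and the α grid are all invented for this example.

    import numpy as np

    def alpha_curve(partials, alphas):
        """Generalized-mean aggregation of |df/dx_j| over N sample points:
        ((1/N) * sum |df/dx_j|**alpha) ** (1/alpha), one value per alpha."""
        abs_d = np.abs(np.asarray(partials, dtype=float))
        return np.array([np.mean(abs_d ** a) ** (1.0 / a) for a in alphas])

    # Toy differentiable "model": f(x1, x2) = x1**2 + 0.1 * x2.
    def f(x):
        return x[:, 0] ** 2 + 0.1 * x[:, 1]

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 2))   # 500 evaluation points in [-1, 1]^2
    eps = 1e-4
    alphas = [1, 2, 4, 8, 16]
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        d_j = (f(Xp) - f(Xm)) / (2 * eps)       # central-difference partial derivative
        print(f"x{j + 1}:", np.round(alpha_curve(d_j, alphas), 3))

In this toy case the curve for x1 climbs from roughly 1 (the mean of |2*x1| on [-1, 1]) toward its maximum of 2 as α grows, while the curve for x2 stays flat at 0.1; this contrast between average and worst-case influence is the kind of information the α-curves are intended to surface.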
Appears in collections: Artículos

Files in this item:
File    Size    Format
IIT-25-171R_preview    3.43 kB    Unknown


DSpace items are protected by copyright, with all rights reserved, unless otherwise indicated.