Please use this identifier to cite or link to this item: http://hdl.handle.net/11531/107351
Full metadata record
DC Field | Value | Language
dc.contributor.author | Polo Molina, Alejandro | es-ES
dc.contributor.author | Alfaya Sánchez, David | es-ES
dc.contributor.author | Portela González, José | es-ES
dc.date.accessioned | 2025-11-25T14:32:55Z | -
dc.date.available | 2025-11-25T14:32:55Z | -
dc.date.issued | 2026-04-01 | es_ES
dc.identifier.issn | 0893-6080 | es_ES
dc.identifier.uri | https://doi.org/10.1016/j.neunet.2025.108278 | es_ES
dc.identifier.uri | http://hdl.handle.net/11531/107351 | -
dc.description | Journal articles | es_ES
dc.description.abstract | Artificial Neural Networks (ANNs) have significantly advanced various fields by effectively recognizing patterns and solving complex problems. Despite these advancements, their interpretability remains a critical challenge, especially in applications where transparency and accountability are essential. To address this, explainable AI (XAI) has made progress in demystifying ANNs, yet interpretability alone is often insufficient. In certain applications, model predictions must align with expert-imposed requirements, sometimes exemplified by partial monotonicity constraints. While monotonic approaches are found in the literature for traditional Multi-layer Perceptrons (MLPs), they still face difficulties in achieving both interpretability and certified partial monotonicity. Recently, the Kolmogorov-Arnold Network (KAN) architecture, based on learnable activation functions parametrized as splines, has been proposed as a more interpretable alternative to MLPs. Building on this, we introduce a novel ANN architecture called MonoKAN, which is based on the KAN architecture and achieves certified partial monotonicity while enhancing interpretability. To achieve this, we employ cubic Hermite splines, which guarantee monotonicity through a set of straightforward conditions. Additionally, by using positive weights in the linear combinations of these splines, we ensure that the network preserves the monotonic relationships between input and output. Our experiments demonstrate that MonoKAN not only enhances interpretability but also improves predictive performance across the majority of benchmarks, outperforming state-of-the-art monotonic MLP approaches. | en-GB
dc.language.iso | en-GB | es_ES
dc.source | Journal: Neural Networks, Period: 1, Volume: online, Issue: , First page: 108278-1, Last page: 108278-16 | es_ES
dc.subject.other | Instituto de Investigación Tecnológica (IIT) | es_ES
dc.title | MonoKAN: Certified monotonic Kolmogorov-Arnold network | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.description.version | info:eu-repo/semantics/publishedVersion | es_ES
dc.rights.holder | | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.keywords | Artificial neural network; Kolmogorov-Arnold network; Certified partial monotonic ANN; Explainable artificial intelligence | en-GB
Appears in collections: Artículos

Files in this item:
File | Description | Size | Format
IIT-26-009R_preprint | | 3.4 MB | Unknown (View/Open)
IIT-26-009R_preview | | 3.62 kB | Unknown (View/Open)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
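The abstract describes MonoKAN's core mechanism: cubic Hermite splines whose monotonicity is guaranteed by simple conditions on the tangents, combined with positive weights so that monotone relationships are preserved through the network. The sketch below is an illustrative reconstruction of that idea using the classical Fritsch-Carlson sufficient condition for monotone cubic Hermite segments; the function names and parameterization are hypothetical, not taken from the paper's code.

```python
import numpy as np

def hermite_segment(t, y0, y1, m0, m1, h):
    """Evaluate a cubic Hermite segment at normalized t in [0, 1].

    y0, y1 are endpoint values, m0, m1 endpoint tangents, h the knot spacing.
    """
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y0 + h * h10 * m0 + h01 * y1 + h * h11 * m1

def is_monotone_increasing(y0, y1, m0, m1, h):
    """Sufficient (Fritsch-Carlson-style) condition for a nondecreasing segment:

    the secant slope delta is nonnegative and both tangents lie in [0, 3*delta].
    """
    delta = (y1 - y0) / h
    return delta >= 0 and 0 <= m0 <= 3 * delta and 0 <= m1 <= 3 * delta

ts = np.linspace(0.0, 1.0, 50)
# A segment satisfying the condition is monotone on the whole interval.
vals = hermite_segment(ts, 0.0, 1.0, 0.5, 0.5, 1.0)
# A positive-weight combination of monotone segments stays monotone,
# mirroring the positive-weight constraint described in the abstract.
combo = 0.7 * vals + 0.3 * hermite_segment(ts, 0.0, 2.0, 1.0, 1.0, 1.0)
```

Checking the condition per segment and restricting combination weights to be positive is what makes the monotonicity *certified* rather than merely empirical: no test-point sampling is needed.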