Show simple item record

dc.contributor.author: Polo Molina, Alejandro [es-ES]
dc.contributor.author: Alfaya Sánchez, David [es-ES]
dc.contributor.author: Portela González, José [es-ES]
dc.date.accessioned: 2025-11-25T05:07:38Z
dc.date.available: 2025-11-25T05:07:38Z
dc.date.issued: 2026-04-01 [es_ES]
dc.identifier.issn: 0893-6080 [es_ES]
dc.identifier.uri: https://doi.org/10.1016/j.neunet.2025.108278 [es_ES]
dc.description: Journal articles [es_ES]
dc.description.abstract: Artificial Neural Networks (ANNs) have significantly advanced various fields by effectively recognizing patterns and solving complex problems. Despite these advancements, their interpretability remains a critical challenge, especially in applications where transparency and accountability are essential. To address this, explainable AI (XAI) has made progress in demystifying ANNs, yet interpretability alone is often insufficient. In certain applications, model predictions must align with expert-imposed requirements, sometimes exemplified by partial monotonicity constraints. While monotonic approaches are found in the literature for traditional Multi-layer Perceptrons (MLPs), they still face difficulties in achieving both interpretability and certified partial monotonicity. Recently, the Kolmogorov-Arnold Network (KAN) architecture, based on learnable activation functions parametrized as splines, has been proposed as a more interpretable alternative to MLPs. Building on this, we introduce a novel ANN architecture called MonoKAN, which is based on the KAN architecture and achieves certified partial monotonicity while enhancing interpretability. To achieve this, we employ cubic Hermite splines, which guarantee monotonicity through a set of straightforward conditions. Additionally, by using positive weights in the linear combinations of these splines, we ensure that the network preserves the monotonic relationships between input and output. Our experiments demonstrate that MonoKAN not only enhances interpretability but also improves predictive performance across the majority of benchmarks, outperforming state-of-the-art monotonic MLP approaches. [es-ES; en-GB]
dc.language.iso: en-GB [es_ES]
dc.source: Journal: Neural Networks, Period: 1, Volume: online, Issue: , First page: 108278-1, Last page: 108278-16 [es_ES]
dc.subject.other: Instituto de Investigación Tecnológica (IIT) [es_ES]
dc.title: MonoKAN: Certified monotonic Kolmogorov-Arnold network [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.description.version: info:eu-repo/semantics/publishedVersion [es_ES]
dc.rights.holder: [es_ES]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [es_ES]
dc.keywords: Artificial neural network; Kolmogorov-Arnold network; Certified partial monotonic ANN; Explainable artificial intelligence [es-ES; en-GB]
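As a note on the mechanism summarized in the abstract above, the two monotonicity ingredients can be illustrated in standard form. This is a minimal sketch of the general principle only, not a reproduction of the exact constraints used in MonoKAN, which are specified in the paper itself. A cubic Hermite segment on $[x_i, x_{i+1}]$ interpolating values $y_i \le y_{i+1}$ with endpoint derivatives $d_i, d_{i+1}$ is non-decreasing under the classical Fritsch-Carlson sufficient condition
\[
0 \le d_i \le 3\,\Delta_i, \qquad 0 \le d_{i+1} \le 3\,\Delta_i, \qquad \Delta_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i},
\]
and a linear combination of non-decreasing functions with non-negative weights,
\[
g(x) = \sum_j w_j\,\phi_j(x), \qquad w_j \ge 0,
\]
is itself non-decreasing, so composing such layers preserves monotonicity with respect to the constrained inputs.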


Files in this item


This item appears in the following collection(s)

  • Artículos
    Published journal articles, book chapters and conference contributions.
