Show simple item record

dc.contributor.author    Lumbreras Sancho, Sara    es-ES
dc.contributor.author    Ciller Cutillas, Pedro    es-ES
dc.date.accessioned      2025-07-16T12:24:16Z
dc.date.available        2025-07-16T12:24:16Z
dc.date.issued           2025-05-02    es_ES
dc.identifier.issn       2076-3417    es_ES
dc.identifier.uri        https://doi.org/10.3390/app15105732    es_ES
dc.identifier.uri        http://hdl.handle.net/11531/101289
dc.description           Artículos en revistas    es_ES
dc.description.abstract      es-ES
dc.description.abstract  Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasoning behind these solutions is often difficult to extract and communicate. This lack of transparency is particularly problematic in fields such as energy planning, healthcare, and resource allocation, where decision-makers require not only optimal solutions but also a clear understanding of trade-offs, constraints, and alternative options. To address these challenges, we propose a framework for interpretable optimization built on three key pillars. First, simplification and surrogate modeling reduce problem complexity while preserving decision-relevant structures, allowing stakeholders to engage with more intuitive representations of optimization models. Second, near-optimal solution analysis identifies alternative solutions that perform comparably to the optimal one, offering flexibility and robustness in decision-making while uncovering hidden trade-offs. Finally, rationale generation ensures that solutions are explainable and actionable by providing insights into the relationships among variables, constraints, and objectives. By integrating these principles, optimization can move beyond black-box decision-making toward greater transparency, accountability, and usability. Enhancing interpretability strengthens both efficiency and ethical responsibility, enabling decision-makers to trust, validate, and implement optimization-driven insights with confidence.    en-GB
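The "near-optimal solution analysis" pillar of the abstract can be illustrated with a minimal sketch (hypothetical, not taken from the article): on a tiny 0/1 knapsack problem, enumerate all feasible selections, find the optimum, and keep every solution whose objective value lies within a relative tolerance of it. The resulting set is the space of comparably performing alternatives the abstract refers to. The item values, weights, capacity, and tolerance below are made-up illustrative numbers.

```python
from itertools import product

# Hypothetical toy instance: a 0/1 knapsack with four items.
values = [10, 7, 6, 4]   # objective contribution of each item
weights = [5, 4, 3, 2]   # resource use of each item
capacity = 9
tolerance = 0.10         # keep solutions within 10% of the optimum

# Enumerate all 2^4 selections and keep the feasible ones.
feasible = [
    sel for sel in product([0, 1], repeat=len(values))
    if sum(w * s for w, s in zip(weights, sel)) <= capacity
]

# Optimal objective value over the feasible set.
best = max(sum(v * s for v, s in zip(values, sel)) for sel in feasible)

# Near-optimal set: every feasible selection within the tolerance band.
near_optimal = [
    sel for sel in feasible
    if sum(v * s for v, s in zip(values, sel)) >= (1 - tolerance) * best
]

print(best, near_optimal)
# → 17 [(0, 1, 1, 1), (1, 0, 1, 0), (1, 1, 0, 0)] (order may vary)
```

Even this toy instance shows why the alternatives matter: two structurally different selections attain the same optimum 17, and a third comes within 10% of it, so a decision-maker can choose among them on criteria the model does not capture. Real applications would replace the brute-force enumeration with solver-based techniques such as modeling to generate alternatives, which the keywords below also mention.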
dc.language.iso          en-GB    es_ES
dc.source                Revista: Applied Sciences, Periodo: 1, Volumen: online, Número: 10, Página inicial: 5732-1, Página final: 5732-28    es_ES
dc.subject.other         Instituto de Investigación Tecnológica (IIT)    es_ES
dc.title                 Interpretable Optimization: Why and How We Should Explain Optimization Models    es_ES
dc.type                  info:eu-repo/semantics/article    es_ES
dc.description.version   info:eu-repo/semantics/publishedVersion    es_ES
dc.rights.holder         es_ES
dc.rights.accessRights   info:eu-repo/semantics/openAccess    es_ES
dc.keywords              es-ES
dc.keywords              interpretable optimization; optimization; explainability; global sensitivity analysis; fitness landscape; near-optimal solutions; surrogate modeling; problem simplification; presolve; sensitivity analysis; modeling all alternatives; modeling to generate alternatives; ethics; rationale generation    en-GB


Files in this item


This item appears in the following collection(s)

  • Artículos
    Published journal articles, book chapters, and conference contributions.
