
Interpretable Optimization: Why and How We Should Explain Optimization Models

Files
IIT-25-150R (1.960 MB)
IIT-25-150R_preview (3.641 KB)
Date
2025-05-02
Author
Lumbreras Sancho, Sara
Ciller Cutillas, Pedro
Status
info:eu-repo/semantics/publishedVersion
Abstract
Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasoning behind these solutions is often difficult to extract and communicate. This lack of transparency is particularly problematic in fields such as energy planning, healthcare, and resource allocation, where decision-makers require not only optimal solutions but also a clear understanding of trade-offs, constraints, and alternative options. To address these challenges, we propose a framework for interpretable optimization built on three key pillars. First, simplification and surrogate modeling reduce problem complexity while preserving decision-relevant structures, allowing stakeholders to engage with more intuitive representations of optimization models. Second, near-optimal solution analysis identifies alternative solutions that perform comparably to the optimal one, offering flexibility and robustness in decision-making while uncovering hidden trade-offs. Last, rationale generation ensures that solutions are explainable and actionable by providing insights into the relationships among variables, constraints, and objectives. By integrating these principles, optimization can move beyond black-box decision-making toward greater transparency, accountability, and usability. Enhancing interpretability strengthens both efficiency and ethical responsibility, enabling decision-makers to trust, validate, and implement optimization-driven insights with confidence.
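
The second pillar, near-optimal solution analysis, corresponds to the "modeling to generate alternatives" (MGA) approach listed among the keywords. The sketch below illustrates the idea on a toy linear program; the costs, constraints, and 5% slack are invented for illustration and are not drawn from the paper itself.

# A minimal sketch of near-optimal solution analysis via MGA.
# The toy LP (costs, constraints, slack) is hypothetical, chosen
# only to show the mechanics; it does not come from the paper.
import numpy as np
from scipy.optimize import linprog

# Toy planning problem: minimize cost c @ x subject to A_ub @ x <= b_ub, x >= 0.
c = np.array([2.0, 3.0, 4.0])             # unit costs of three candidate options
A_ub = np.array([[-1.0, -1.0, -1.0],      # total capacity must reach 10 units
                 [ 1.0,  0.0,  0.0]])     # option 1 is limited to 6 units
b_ub = np.array([-10.0, 6.0])

# Step 1: find the cost-optimal solution and its objective value z*.
opt = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
z_star = opt.fun

# Step 2 (MGA): among all solutions within `slack` of z*, push activity
# away from the options used at the optimum, exposing a structurally
# different alternative with near-identical cost.
slack = 0.05                              # accept up to a 5% cost increase
used = opt.x > 1e-6                       # options active in the optimum
c_mga = used.astype(float)                # minimize reliance on those options
A_budget = np.vstack([A_ub, c])           # add near-optimality constraint:
b_budget = np.append(b_ub, (1 + slack) * z_star)   # c @ x <= (1 + slack) * z*
alt = linprog(c_mga, A_ub=A_budget, b_ub=b_budget, bounds=[(0, None)] * 3)

print("optimal plan:    ", np.round(opt.x, 3), "cost:", round(z_star, 3))
print("alternative plan:", np.round(alt.x, 3), "cost:", round(float(c @ alt.x), 3))

Repeating the second step while adding each new solution's active options to the penalized set is the classic Hop-Skip-Jump heuristic, which spreads successive alternatives across the near-optimal region so decision-makers can see the trade-offs that a single optimum hides.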
 
URI
https://doi.org/10.3390/app15105732
http://hdl.handle.net/11531/101289
Activity Type
Journal articles
ISSN
2076-3417
Subjects / Categories / SDGs
Instituto de Investigación Tecnológica (IIT)
Keywords
interpretable optimization; optimization; explainability; global sensitivity analysis; fitness landscape; near-optimal solutions; surrogate modeling; problem simplification; presolve; sensitivity analysis; modeling all alternatives; modeling to generate alternatives; ethics; rationale generation
Collections
  • Artículos

Repositorio de la Universidad Pontificia Comillas copyright © 2015. Developed with DSpace Software
Contact Us | Send Feedback
 

 
