
Interpretable Optimization: Why and How We Should Explain Optimization Models

Files
IIT-25-150R (1.960Mb)
IIT-25-150R_preview (3.641Kb)
Date
2025-05-02
Author
Lumbreras Sancho, Sara
Ciller Cutillas, Pedro
Status
info:eu-repo/semantics/publishedVersion

Abstract
Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasoning behind these solutions is often difficult to extract and communicate. This lack of transparency is particularly problematic in fields such as energy planning, healthcare, and resource allocation, where decision-makers require not only optimal solutions but also a clear understanding of trade-offs, constraints, and alternative options. To address these challenges, we propose a framework for interpretable optimization built on three key pillars. First, simplification and surrogate modeling reduce problem complexity while preserving decision-relevant structures, allowing stakeholders to engage with more intuitive representations of optimization models. Second, near-optimal solution analysis identifies alternative solutions that perform comparably to the optimal one, offering flexibility and robustness in decision-making while uncovering hidden trade-offs. Last, rationale generation ensures that solutions are explainable and actionable by providing insights into the relationships among variables, constraints, and objectives. By integrating these principles, optimization can move beyond black-box decision-making toward greater transparency, accountability, and usability. Enhancing interpretability strengthens both efficiency and ethical responsibility, enabling decision-makers to trust, validate, and implement optimization-driven insights with confidence.
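The second pillar, near-optimal solution analysis, can be illustrated with a minimal sketch (not taken from the paper): brute-force enumeration of a toy 0/1 knapsack problem, keeping every feasible solution whose objective is within 10% of the optimum. All item data and the tolerance below are made-up values for illustration only.

```python
from itertools import product

# Hypothetical toy data: item values, item weights, and a capacity limit.
values = [6, 5, 4, 3]
weights = [4, 3, 3, 2]
capacity = 7

def objective(x):
    """Total value of the selection x (a 0/1 tuple)."""
    return sum(v * xi for v, xi in zip(values, x))

def feasible(x):
    """Selection x respects the capacity constraint."""
    return sum(w * xi for w, xi in zip(weights, x)) <= capacity

# Enumerate all feasible selections and find the optimal objective value.
solutions = [x for x in product([0, 1], repeat=len(values)) if feasible(x)]
best = max(objective(x) for x in solutions)

# Near-optimal set: every feasible solution within 10% of the optimum.
tolerance = 0.10
near_optimal = [x for x in solutions if objective(x) >= (1 - tolerance) * best]

print("optimum:", best)
print("near-optimal solutions:", sorted(near_optimal))
```

Presenting the whole near-optimal set, rather than the single optimizer, is what exposes the hidden trade-offs and flexibility the abstract refers to: here two structurally different selections achieve almost the same value, so a decision-maker can choose between them on criteria the model does not capture.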
 
URI
https://doi.org/10.3390/app15105732
http://hdl.handle.net/11531/101289
Activity Type
Journal articles
ISSN
2076-3417
Subjects / categories / SDGs
Instituto de Investigación Tecnológica (IIT)
Keywords

interpretable optimization; optimization; explainability; global sensitivity analysis; fitness landscape; near-optimal solutions; surrogate modeling; problem simplification; presolve; sensitivity analysis; modeling all alternatives; modeling to generate alternatives; ethics; rationale generation
Collections
  • Artículos

Repository of Universidad Pontificia Comillas copyright © 2015. Developed with DSpace Software
Contact | Suggestions