
Explaining the solutions of the unit commitment with interpretable machine learning

File
IIT-23-383WP.pdf (1.252 MB)
Author
Lumbreras Sancho, Sara
Tejada Arango, Diego Alejandro
Elechiguerra Batlle, Daniel
Status
info:eu-repo/semantics/draft

Abstract
The energy transition needs mathematical models to address the complexity of shifting towards sustainable energy sources. In addition to providing accurate solutions, these models must be explainable and available for discussion among stakeholders to facilitate informed decision-making and ensure a successful transition. This paper contributes to the explainability of power systems models by applying interpretable machine learning techniques to improve understanding of the solutions to the unit commitment problem, using a case study based on the IEEE 118N system. The developed methodology aims to describe the optimal commitment solutions as a function of the conditions of the system in a compact manner that is understandable by a human being. This type of information takes the form of 'which plants are needed under which conditions' and is routinely learned through experience by system operators and other agents participating in the system. This experiential knowledge is held in an approximate form that is simple enough to help make or justify decisions. By applying interpretable machine learning techniques, our methodology automatically extracts what was previously available only through human experience and reflection. Our approach combines model trees and node clustering to find a concise description of the different situations in which the system can be found. Our results show that the methodology can explain these modes of operation for the 118N system simply enough to be understood by a human unfamiliar with the system. This demonstrates that interpretable machine learning can provide valuable insights into real solutions of the unit commitment problem and help improve decision-making in this area.
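To make the idea concrete, the sketch below shows how a compact "which plants are needed under which conditions" rule can be extracted from commitment data. It is only an illustration under assumed inputs: the data are synthetic, the feature names (demand_MW, wind_MW) are hypothetical, and a shallow scikit-learn decision tree stands in for the paper's actual combination of model trees and node clustering.

# Hypothetical sketch (not the paper's code): approximate the commitment
# status of one generating unit with a small, human-readable rule set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic system conditions: hourly demand (MW) and wind output (MW).
n_hours = 1000
demand = rng.uniform(2000, 6000, n_hours)
wind = rng.uniform(0, 1500, n_hours)

# Synthetic "optimal" commitment of one thermal unit: committed when net
# load (demand minus wind) is high. In the paper this label would instead
# come from solving the unit commitment problem for the case study system.
committed = (demand - wind > 3500).astype(int)

X = np.column_stack([demand, wind])
tree = DecisionTreeClassifier(max_depth=2).fit(X, committed)

# Print the learned rules: a compact description of when the unit is needed.
print(export_text(tree, feature_names=["demand_MW", "wind_MW"]))

Running this prints a few nested threshold rules (e.g. splits on demand_MW and wind_MW) that a person unfamiliar with the system can read directly, which is the kind of compact explanation the methodology targets.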
 
URI
http://hdl.handle.net/11531/87615
Keywords


Collections
  • Documentos de Trabajo

Repository of the Universidad Pontificia Comillas copyright © 2015. Developed with DSpace Software
Contact Us | Send Feedback
 

 
