Multi-Objective Bayesian Optimization of Deep Reinforcement Learning for Environmental, Social, and Governance (ESG) Financial Portfolio Management

File
IIT-25-201R_preview (4.006Kb)
Date
2025-06-01
Author
Garrido Merchán, Eduardo César
Coronado Vaca, María
Status
info:eu-repo/semantics/publishedVersion
Abstract
 
 
Financial portfolio management focuses on maximizing several objectives over a trading period, related not only to the risk and performance of the portfolio but also to other objectives such as its environmental, social, and governance (ESG) score. Regrettably, classic methods such as the Markowitz model take into account only the risk and performance of the portfolio, not ESG scores. Moreover, the assumptions this model makes about financial returns render it unsuitable for highly volatile markets such as the technology sector. This paper investigates the application of deep reinforcement learning (DRL) to ESG financial portfolio management. DRL agents circumvent the shortcomings of classic models: they do not assume that financial returns are normally distributed, and they can incorporate any information, such as the ESG score, provided they are configured to receive a reward that improves the corresponding objective. However, the performance of DRL agents exhibits high variability and is very sensitive to the values of their hyperparameters. Bayesian optimization is a class of methods suited to the optimization of black-box functions, that is, functions whose analytical expression is unknown and that are noisy and expensive to evaluate. The hyperparameter tuning problem of DRL algorithms fits this scenario perfectly. Because training an agent for even a single objective is very expensive, requiring millions of timesteps, instead of optimizing a single objective that mixes a risk-performance metric with an ESG metric, we keep the objectives separate and solve the multi-objective problem, obtaining an optimal Pareto set of portfolios that represents the best trade-offs between the Sharpe ratio and the mean ESG score of the portfolio, leaving the choice of the final portfolio to the investor. We conducted our experiments using environments encoded within the OpenAI Gym, adapted from the FinRL platform. The experiments are carried out on the Dow Jones Industrial Average (DJIA) and NASDAQ markets, measuring the Sharpe ratio achieved by the agent and the mean ESG score of the portfolio. We compare the obtained Pareto sets in terms of hypervolume, illustrating how the resulting portfolios represent the best trade-offs between the Sharpe ratio and the mean ESG score. We also demonstrate the usefulness of the proposed methodology by comparing the obtained hypervolume with that achieved by a random search over the DRL hyperparameter space.
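The abstract describes evaluating DRL hyperparameter configurations on two objectives (Sharpe ratio and mean ESG score), keeping the non-dominated Pareto set, and comparing fronts by hypervolume. The following is a minimal, self-contained Python sketch of that evaluation loop, not the authors' implementation: the hyperparameter ranges, the evaluate() stub, and the reference point are hypothetical placeholders (the real black box trains agents on FinRL/Gym environments for millions of timesteps), and random candidate sampling stands in for both the Bayesian-optimization acquisition step and the random-search baseline.

import numpy as np

rng = np.random.default_rng(0)

def sample_hyperparameters():
    """Hypothetical DRL hyperparameter space (learning rate, discount, batch size)."""
    return {
        "learning_rate": 10 ** rng.uniform(-5, -3),
        "gamma": rng.uniform(0.9, 0.999),
        "batch_size": int(rng.choice([64, 128, 256])),
    }

def evaluate(hparams):
    """Stand-in for the expensive black box: training a DRL agent on a
    FinRL/Gym market environment and backtesting it, returning the pair
    (Sharpe ratio, mean ESG score). Placeholder values only."""
    sharpe = rng.normal(1.0, 0.3)
    esg = rng.normal(60.0, 10.0)
    return np.array([sharpe, esg])

def pareto_front(points):
    """Keep points not dominated by any other (both objectives maximized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front relative to a reference point."""
    front = front[np.argsort(-front[:, 0])]  # sort by first objective, descending
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        if x > ref[0] and y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Outer loop: in the paper, candidate hyperparameters would come from a
# multi-objective Bayesian optimizer; random sampling here plays the role
# of the random-search baseline used for comparison.
observations = np.array([evaluate(sample_hyperparameters()) for _ in range(20)])
front = pareto_front(observations)
ref_point = np.array([0.0, 0.0])  # hypothetical reference point
print("Pareto front:\n", front)
print("Hypervolume:", hypervolume_2d(front, ref_point))

In this setup, two configurations are compared simply by the hypervolume their Pareto fronts dominate with respect to the same reference point; a larger hypervolume means a better set of Sharpe/ESG trade-offs, which is how the paper contrasts Bayesian optimization with random search.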
 
URI
https://doi.org/10.1002/isaf.70008
Activity Type
Journal articles
ISSN
1550-1949
Subjects / categories / SDGs
Instituto de Investigación Tecnológica (IIT)
Keywords


Collections
  • Artículos

Repositorio de la Universidad Pontificia Comillas copyright © 2015. Developed with DSpace Software.