Show simple item record

dc.contributor.author    Güitta López, Lucía    es-ES
dc.contributor.author    Boal Martín-Larrauri, Jaime    es-ES
dc.contributor.author    López López, Álvaro Jesús    es-ES
dc.date.accessioned    2024-02-23T13:31:50Z
dc.date.available    2024-02-23T13:31:50Z
dc.date.issued    2023-06-01    es_ES
dc.identifier.issn    0924-669X    es_ES
dc.identifier.uri    https://doi.org/10.1007/s10489-022-04227-3    es_ES
dc.description    Journal articles    es_ES
dc.description.abstract    es-ES
dc.description.abstract    The industrial application of Deep Reinforcement Learning (DRL) is frequently slowed down by an inability to generate the experience required to train the models. Collecting data often involves considerable time and financial outlays that can make it unaffordable. Fortunately, devices like robots can be trained with synthetic experience through virtual environments. With this approach, the problems of sample efficiency with artificial agents are mitigated, but another issue arises: the need to efficiently transfer the synthetic experience into the real world (sim-to-real). This paper analyzes the robustness of a state-of-the-art sim-to-real technique known as Progressive Neural Networks (PNNs) and studies how adding diversity to the synthetic experience can complement it. To better understand the drivers that lead to a lack of robustness, the robotic agent is still tested in a virtual environment to ensure total control over the divergence between the simulated and real models. The results show that a PNN-like agent exhibits a substantial decrease in its robustness at the beginning of the real training phase. Randomizing specific variables during simulation-based training significantly mitigates this issue. The average increase in the model's accuracy is around 25% when diversity is introduced in the training process. This improvement can translate into a decrease in the number of real experiences required for the same final robust performance. Notwithstanding, adding real experience to agents should still be beneficial, regardless of the quality of the virtual experience fed to the agent. The source code is available at: https://gitlab.com/comillas-cic/sim-to-real/pnn-dr.git    en-GB
dc.format.mimetype    application/pdf    es_ES
dc.language.iso    en-GB    es_ES
dc.rights    es_ES
dc.rights.uri    es_ES
dc.source    Journal: Applied Intelligence, Period: 1, Volume: online, Issue: 12, First page: 14903, Last page: 14917    es_ES
dc.subject.other    Instituto de Investigación Tecnológica (IIT)    es_ES
dc.title    Learning more with the same effort: how randomization improves the robustness of a robotic deep reinforcement learning agent    es_ES
dc.type    info:eu-repo/semantics/article    es_ES
dc.description.version    info:eu-repo/semantics/publishedVersion    es_ES
dc.rights.accessRights    info:eu-repo/semantics/restrictedAccess    es_ES
dc.keywords    es-ES
dc.keywords    Reinforcement Learning, Deep Learning, Sim-To-Real, Domain Randomization, Sample Efficiency, Robotics    en-GB
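
The abstract above attributes the robustness gain to randomizing specific simulation variables during training, i.e. domain randomization. As a rough illustration of that idea only, the following is a minimal Python sketch assuming a Gymnasium-style interface; the wrapper, the set_dynamics() hook, and the randomized parameters (mass scale and friction) and their ranges are hypothetical assumptions, not taken from the paper or its repository.

    import numpy as np
    import gymnasium as gym

    class DomainRandomizationWrapper(gym.Wrapper):
        """Re-sample selected physics parameters at every episode reset, so the
        agent trains on a distribution of simulated dynamics rather than a
        single fixed model. Parameter names and ranges are illustrative."""

        def __init__(self, env, mass_range=(0.8, 1.2), friction_range=(0.5, 1.5)):
            super().__init__(env)
            self.mass_range = mass_range
            self.friction_range = friction_range

        def reset(self, **kwargs):
            # Draw new dynamics before each episode. set_dynamics() is a
            # hypothetical hook on the underlying simulator, not a real
            # Gymnasium API.
            self.env.unwrapped.set_dynamics(
                mass_scale=np.random.uniform(*self.mass_range),
                friction=np.random.uniform(*self.friction_range),
            )
            return self.env.reset(**kwargs)

Training a PNN column on trajectories collected through such a wrapper exposes it to a family of dynamics rather than one fixed simulator, which is the mechanism the abstract credits for the roughly 25% accuracy gain.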


Files in this item


This item appears in the following collection(s)

  • Artículos
    Published journal articles, book chapters, and conference contributions.
