Please use this identifier to cite or link to this item:
http://hdl.handle.net/11531/109669
| Title: | Evaluating the Perception, Understanding, and Forgetting of Progressive Neural Networks: A Quantitative and Qualitative Analysis |
| Authors: | Güitta López, Lucía; Boal Martín-Larrauri, Jaime; López López, Álvaro Jesús |
| Publication date: | 1-Apr-2026 |
| Abstract: | The use of virtual environments to collect the experience required by deep reinforcement learning models is accelerating the deployment of these algorithms in industrial environments. However, once the experience-gathering problem is solved, it is necessary to address how to efficiently transfer the knowledge from the virtual scenario to reality. This paper focuses on examining Progressive Neural Networks (PNNs) as a promising transfer learning technique. The analyses carried out range from studying the capabilities and limits of the layers responsible for learning the state representation from a pixel space, which could arguably be the convolutional blocks, to the forgetting agents suffer when learning a new task. Introducing controlled visual changes in the environment scene can lead to a performance degradation of 50.3% in the worst-case scenario. These visual discrepancies significantly impact the agent’s learning time and accuracy when using a PNN architecture. Regarding the PNN forgetting assessment, partial forgetting occurs in two of the three environments analyzed, those where the agent masters its new task. This could be due to a balance between the relevance of the new features learned and the ones inherited from the teacher agent. |
| Description: | Journal articles |
| URI: | https://doi.org/10.3390/ai7040120 http://hdl.handle.net/11531/109669 |
| ISSN: | 2673-2688 |
| Appears in collections: | Articles |
Files in this item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| IIT-26-104R.pdf | | 3.66 MB | Adobe PDF | View/Open |
| IIT-26-104R_preview.pdf | | 3.16 kB | Adobe PDF | View/Open |
DSpace items are protected by copyright, with all rights reserved, unless otherwise indicated.
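The abstract above refers to Progressive Neural Networks, which add a new "column" of parameters per task while freezing earlier columns and feeding their hidden activations into the new column through lateral connections. The following is a minimal NumPy sketch of that forward pass only; all layer sizes, weight names, and the two-layer structure are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def relu(x):
    return np.maximum(0.0, x)


rng = np.random.default_rng(0)

# Column 1: trained on the source task, then frozen (the "teacher").
W1_a = rng.standard_normal((8, 4))   # column 1, layer 1
W1_b = rng.standard_normal((3, 8))   # column 1, layer 2

# Column 2: fresh parameters for the target task, plus a lateral
# adapter U_b that reads column 1's hidden activations.
W2_a = rng.standard_normal((8, 4))   # column 2, layer 1
W2_b = rng.standard_normal((3, 8))   # column 2, layer 2
U_b = rng.standard_normal((3, 8))    # lateral connection into layer 2


def pnn_forward(x):
    """Forward pass through a two-column PNN on input x (shape (4,))."""
    # Column 1: frozen weights; in training, no gradients flow here.
    h1 = relu(W1_a @ x)
    y1 = W1_b @ h1
    # Column 2 combines its own path with the lateral feature h1,
    # so source-task representations stay available for the new task.
    h2 = relu(W2_a @ x)
    y2 = W2_b @ h2 + U_b @ h1
    return y1, y2


y1, y2 = pnn_forward(np.ones(4))
print(y1.shape, y2.shape)  # (3,) (3,)
```

Freezing column 1 is what makes catastrophic forgetting of the source task structurally impossible in a PNN; the partial forgetting the abstract measures concerns the transferred features' usefulness, which this sketch does not model.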