Show simple item record
PRIMER: Perception-Aware Robust Learning-based Multiagent Trajectory Planner
| dc.contributor.author | Tordesillas Torres, Jesús | es-ES |
| dc.contributor.author | How, Jonathan P. | es-ES |
| dc.date.accessioned | 2025-10-16T12:40:51Z | |
| dc.date.available | 2025-10-16T12:40:51Z | |
| dc.date.issued | 2025-09-02 | es_ES |
| dc.identifier.uri | http://hdl.handle.net/11531/106422 | |
| dc.description | Capítulos en libros | es_ES |
| dc.description.abstract | In decentralized multiagent trajectory planners, agents need to communicate and exchange their positions to generate collision-free trajectories. However, due to localization errors and uncertainties, trajectory deconfliction can fail even if trajectories are perfectly shared between agents. To address this issue, we first present PARM and PARM*, perception-aware, decentralized, asynchronous multiagent trajectory planners that enable a team of agents to navigate uncertain environments while deconflicting trajectories and avoiding obstacles using perception information. PARM* differs from PARM in that it is less conservative, using more computation to find closer-to-optimal solutions. While these methods achieve state-of-the-art performance, they suffer from high computational costs because they need to solve large optimization problems onboard, making it difficult for agents to replan at high rates. To overcome this challenge, we present our second key contribution, PRIMER, a learning-based planner trained with imitation learning (IL) using PARM* as the expert demonstrator. PRIMER leverages the low computational requirements of neural networks at deployment and achieves computation speeds up to 5614 times faster than optimization-based approaches. | es-ES |
| dc.description.abstract | In decentralized multiagent trajectory planners, agents need to communicate and exchange their positions to generate collision-free trajectories. However, due to localization errors and uncertainties, trajectory deconfliction can fail even if trajectories are perfectly shared between agents. To address this issue, we first present PARM and PARM*, perception-aware, decentralized, asynchronous multiagent trajectory planners that enable a team of agents to navigate uncertain environments while deconflicting trajectories and avoiding obstacles using perception information. PARM* differs from PARM in that it is less conservative, using more computation to find closer-to-optimal solutions. While these methods achieve state-of-the-art performance, they suffer from high computational costs because they need to solve large optimization problems onboard, making it difficult for agents to replan at high rates. To overcome this challenge, we present our second key contribution, PRIMER, a learning-based planner trained with imitation learning (IL) using PARM* as the expert demonstrator. PRIMER leverages the low computational requirements of neural networks at deployment and achieves computation speeds up to 5614 times faster than optimization-based approaches. | en-GB |
| dc.format.mimetype | application/pdf | es_ES |
| dc.language.iso | en-GB | es_ES |
| dc.publisher | IEEE Robotics and Automation Society; Institute of Electrical and Electronics Engineers (Atlanta, United States of America) | es_ES |
| dc.rights | | es_ES |
| dc.rights.uri | | es_ES |
| dc.source | Book: IEEE International Conference on Robotics and Automation - ICRA 2025, Starting page: 14154, Final page: 14160 | es_ES |
| dc.subject.other | Instituto de Investigación Tecnológica (IIT) | es_ES |
| dc.title | PRIMER: Perception-Aware Robust Learning-based Multiagent Trajectory Planner | es_ES |
| dc.type | info:eu-repo/semantics/bookPart | es_ES |
| dc.description.version | info:eu-repo/semantics/publishedVersion | es_ES |
| dc.rights.accessRights | info:eu-repo/semantics/restrictedAccess | es_ES |
| dc.keywords | | es-ES |
| dc.keywords | | en-GB |
Files in this item
This item appears in the following collection(s)
-
Articles
Journal articles, book chapters, and published conference contributions.
