Open Access

Hybrid Policy Learning for Multi-Agent Pathfinding

Skrynnik A., Yakovleva A., Davydov V., Yakovlev K., Panov A.I.
Document type: Journal Article
Publication date: 2021-12-02
Journal: IEEE Access
Publisher: IEEE
Quartile: Q1
ISSN: 2169-3536
  • General Materials Science
  • General Engineering
  • General Computer Science
Abstract
In this work we study the behavior of groups of autonomous vehicles that are part of Internet of Vehicles systems. One of the challenging modes of operation of such systems is the case when the observability of each vehicle is limited and global/local communication is unstable, e.g. in crowded parking lots. In such scenarios the vehicles have to rely on local observations and exhibit cooperative behavior to ensure safe and efficient trips. This type of problem can be abstracted to so-called multi-agent pathfinding, in which a group of agents, confined to a graph, have to find collision-free paths to their goals (ideally minimizing an objective function, e.g. travel time). Widely used algorithms for solving this problem rely on the assumption that a central controller exists for which the full state of the environment (i.e. the agents' current positions, their targets, the configuration of the static obstacles, etc.) is known, and they cannot be straightforwardly adapted to partially observable setups. To this end, we suggest a novel approach based on the decomposition of the problem into two sub-tasks: reaching the goal and avoiding collisions. To accomplish each of these tasks we utilize reinforcement learning methods such as deep Monte Carlo tree search, Q-mixing networks, and policy gradient methods to design policies that map the agents' observations to actions. Next, we introduce a policy-mixing mechanism to end up with a single hybrid policy that allows each agent to exhibit both types of behavior: the individual one (reaching the goal) and the cooperative one (avoiding collisions with other agents). We conduct an extensive empirical evaluation showing that the suggested hybrid policy outperforms standalone state-of-the-art reinforcement learning methods for this kind of problem by a notable margin.
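The core idea of the abstract, switching between an individual goal-reaching policy and a cooperative collision-avoidance policy based on local observations, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the `Observation` type, the hand-coded sub-policies (stand-ins for the learned RL policies), and the adjacency-based switching rule are all assumptions made here for exposition.

```python
# Hypothetical sketch of the policy-mixing mechanism described in the abstract.
# The paper learns the sub-policies with RL (deep MCTS, Q-mixing, policy
# gradients); here they are replaced by trivial hand-coded stand-ins.
from dataclasses import dataclass
from typing import List, Tuple

Action = str  # e.g. "up", "down", "left", "right", "wait"

@dataclass
class Observation:
    agent_pos: Tuple[int, int]
    goal_pos: Tuple[int, int]
    nearby_agents: List[Tuple[int, int]]  # agent positions inside the local field of view

def goal_policy(obs: Observation) -> Action:
    """Individual behavior: greedy step toward the goal (stand-in for a learned policy)."""
    (x, y), (gx, gy) = obs.agent_pos, obs.goal_pos
    if gx > x:
        return "right"
    if gx < x:
        return "left"
    if gy > y:
        return "up"
    if gy < y:
        return "down"
    return "wait"

def avoid_policy(obs: Observation) -> Action:
    """Cooperative behavior: yield to others (stand-in for a learned policy)."""
    return "wait"

def mixed_policy(obs: Observation) -> Action:
    """Hybrid policy: act cooperatively when a collision looks imminent,
    otherwise pursue the goal individually."""
    x, y = obs.agent_pos
    collision_imminent = any(
        abs(ax - x) + abs(ay - y) <= 1 for ax, ay in obs.nearby_agents
    )
    return avoid_policy(obs) if collision_imminent else goal_policy(obs)

obs = Observation(agent_pos=(0, 0), goal_pos=(3, 0), nearby_agents=[(1, 0)])
print(mixed_policy(obs))  # adjacent agent detected -> cooperative "wait"
```

In the paper the switch itself is learned rather than hand-coded, which is what lets the mixing adapt to situations where simply waiting is suboptimal (e.g. symmetric deadlocks).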
GOST
1. Skrynnik A. et al. Hybrid Policy Learning for Multi-Agent Pathfinding // IEEE Access. 2021. Vol. 9. P. 126034–126047.
RIS

TY  - JOUR
DO  - 10.1109/access.2021.3111321
UR  - http://dx.doi.org/10.1109/ACCESS.2021.3111321
TI  - Hybrid Policy Learning for Multi-Agent Pathfinding
T2  - IEEE Access
AU  - Skrynnik, Alexey
AU  - Yakovleva, Alexandra
AU  - Davydov, Vasilii
AU  - Yakovlev, Konstantin
AU  - Panov, Aleksandr I.
PY  - 2021
PB  - Institute of Electrical and Electronics Engineers (IEEE)
SP  - 126034
EP  - 126047
VL  - 9
SN  - 2169-3536
ER  -

BibTeX

@article{Skrynnik_2021,
  doi       = {10.1109/access.2021.3111321},
  url       = {https://doi.org/10.1109/access.2021.3111321},
  year      = {2021},
  publisher = {Institute of Electrical and Electronics Engineers ({IEEE})},
  volume    = {9},
  pages     = {126034--126047},
  author    = {Alexey Skrynnik and Alexandra Yakovleva and Vasilii Davydov and Konstantin Yakovlev and Aleksandr I. Panov},
  title     = {Hybrid Policy Learning for Multi-Agent Pathfinding},
  journal   = {{IEEE} Access}
}

MLA
Skrynnik, Alexey et al. “Hybrid Policy Learning for Multi-Agent Pathfinding.” IEEE Access 9 (2021): 126034–126047. Crossref. Web.