Lecture Notes in Networks and Systems, volume 330 LNNS, pages 35-43
Application of Reinforcement Learning in Open Space Planner for Apollo Auto
Ivanov Dmitriy 1, Panov Aleksandr I 1, 2
Publication type: Book Chapter
Publication date: 2021-09-16
Quartile SCImago: Q4
Quartile WOS: —
Impact factor: —
ISSN: 2367-3370, 2367-3389
Abstract
A local planner makes a trajectory physically executable for an agent. The Open Space Planner of the Apollo framework, based on nonlinear optimization methods, smooths the trajectory received from a global planner. This dependency on a global planner forces an agent to relaunch both planners when local changes occur (e.g., when the environment contains dynamic obstacles), which can waste too much time. In this article, we consider a different approach based on reinforcement learning. This method allows the agent to generate a trajectory using information about the environment (the current and goal state, lidar sensors, etc.). Experiments conducted in a simplified environment show that such an algorithm can be implemented as the local planner in the Apollo infrastructure.
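The abstract describes a policy that maps environment observations (current state, goal state, lidar readings) directly to trajectory actions, instead of smoothing a globally planned path. A minimal sketch of that observation-to-action rollout loop is below; all names are hypothetical, a random policy stands in for the trained one, and the simple kinematic update is an assumption, not the paper's model:

```python
import math
import random

def make_observation(state, goal, lidar):
    """Concatenate current state, goal state, and lidar ranges into one flat vector."""
    return list(state) + list(goal) + list(lidar)

class RandomPolicy:
    """Stand-in for a trained RL policy: maps an observation to (steering, acceleration)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def act(self, obs):
        steering = self.rng.uniform(-0.5, 0.5)  # rad
        accel = self.rng.uniform(-1.0, 1.0)     # m/s^2
        return steering, accel

def rollout(policy, state, goal, lidar, steps=10, dt=0.1):
    """Build a local trajectory by repeatedly querying the policy (kinematic update)."""
    x, y, theta, v = state
    traj = [(x, y)]
    for _ in range(steps):
        obs = make_observation((x, y, theta, v), goal, lidar)
        steering, accel = policy.act(obs)
        v += accel * dt
        theta += steering * dt
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        traj.append((x, y))
    return traj

# One rollout from the origin toward a goal at (5, 5), with 16 dummy lidar rays.
trajectory = rollout(RandomPolicy(), state=(0.0, 0.0, 0.0, 1.0),
                     goal=(5.0, 5.0), lidar=[10.0] * 16)
```

In the actual system, the random policy would be replaced by a trained network and the trajectory handed to Apollo's control stack; the point of the sketch is that no global-planner output appears in the loop, only local observations.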
Citations by journals
- Lecture Notes in Computer Science — 1 publication, 50%
- IEEE Robotics and Automation Letters — 1 publication, 50%
Citations by publishers
- Springer Nature — 1 publication, 50%
- IEEE — 1 publication, 50%
- We do not take into account publications without a DOI.
- Statistics are recalculated only for publications connected to researchers, organizations, and labs registered on the platform.
- Statistics are recalculated weekly.
Citations per year: 2021 — 1, 2022 — 0, 2023 — 1.
Metrics
Cite this
GOST
Ivanov D., Panov A. I. Application of Reinforcement Learning in Open Space Planner for Apollo Auto // Lecture Notes in Networks and Systems. 2021. Vol. 330 LNNS. pp. 35-43.
RIS
TY - GENERIC
DO - 10.1007/978-3-030-87178-9_4
UR - https://doi.org/10.1007%2F978-3-030-87178-9_4
TI - Application of Reinforcement Learning in Open Space Planner for Apollo Auto
T2 - Lecture Notes in Networks and Systems
AU - Ivanov, Dmitriy
AU - Panov, Aleksandr I
PY - 2021
DA - 2021/09/16 00:00:00
PB - Springer Nature
SP - 35-43
VL - 330 LNNS
SN - 2367-3370
SN - 2367-3389
ER -
BibTex
@incollection{2021_Ivanov,
author = {Dmitriy Ivanov and Aleksandr I Panov},
title = {Application of Reinforcement Learning in Open Space Planner for Apollo Auto},
booktitle = {Lecture Notes in Networks and Systems},
publisher = {Springer Nature},
year = {2021},
volume = {330 LNNS},
pages = {35--43},
month = {sep},
doi = {10.1007/978-3-030-87178-9_4}
}