Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning
Na, Seongin; Niu, Hanlin; Lennox, Barry; Arvin, Farshad
Abstract
Autonomous vehicles have been highlighted as a major growth area for future transportation systems, and the deployment of large numbers of these vehicles is expected once safety and legal challenges are overcome. To meet the necessary safety standards, effective collision avoidance technologies are required to ensure that the number of accidents is kept to a minimum. As large numbers of autonomous vehicles operating together on roads can be regarded as a swarm system, we propose a bio-inspired collision avoidance strategy using virtual pheromones, an approach that has evolved effectively in nature over many millions of years. Previous research using virtual pheromones showed the potential of pheromone-based systems to maneuver a swarm of robots. However, designing an individual controller that maximises the performance of the entire swarm is a major challenge. In this paper, we propose a novel deep reinforcement learning (DRL) based approach that trains a controller to produce collision avoidance behaviour. To accelerate training, we propose a novel sampling strategy called Highlight Experience Replay and integrate it with a Deep Deterministic Policy Gradient algorithm in which noise is added to the weights and biases of the artificial neural network to improve exploration. To evaluate the proposed DRL-based controller, we applied it to navigation and collision avoidance tasks in three different traffic scenarios. The experimental results showed that the proposed DRL-based controller outperformed the manually tuned controller in terms of stability, effectiveness, robustness and ease of tuning. Furthermore, the proposed Highlight Experience Replay method outperformed the popular Prioritized Experience Replay sampling strategy, requiring on average only 27% of its training time across the three stages.
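The abstract states that exploration is improved by adding noise directly to the weights and biases of the actor network within a Deep Deterministic Policy Gradient setup. The paper's actual network architecture, noise schedule and update rules are not given here, so the snippet below is only a minimal sketch of that idea, assuming a small PyTorch actor and Gaussian parameter perturbations; the names `Actor` and `perturbed_copy`, the layer sizes, and the value `sigma=0.05` are illustrative placeholders, not taken from the paper.

```python
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Small DDPG-style actor; sizes are illustrative, not from the paper."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

def perturbed_copy(actor, sigma=0.05):
    """Return a copy of the actor with Gaussian noise added to every weight
    and bias; the copy is used only to select exploratory actions."""
    noisy = copy.deepcopy(actor)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy

# Usage: act with the perturbed copy, train the clean actor as usual.
actor = Actor(obs_dim=8, act_dim=2)
explorer = perturbed_copy(actor, sigma=0.05)
obs = torch.randn(1, 8)       # placeholder observation
action = explorer(obs)        # exploratory action sent to the environment
```

In a full DDPG loop, the clean actor and its critic would be updated from replayed transitions, while the perturbed copy would be refreshed periodically so that exploration tracks the improving policy.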
Citation
Na, S., Niu, H., Lennox, B., & Arvin, F. (2022). Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning. IEEE Transactions on Vehicular Technology, 71(3), 2511-2526. https://doi.org/10.1109/tvt.2022.3145346
| Journal Article Type | Article |
|---|---|
| Acceptance Date | Jan 12, 2022 |
| Online Publication Date | Jan 25, 2022 |
| Publication Date | Mar 15, 2022 |
| Deposit Date | May 27, 2022 |
| Journal | IEEE Transactions on Vehicular Technology |
| Print ISSN | 0018-9545 |
| Electronic ISSN | 1939-9359 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Volume | 71 |
| Issue | 3 |
| Pages | 2511-2526 |
| DOI | https://doi.org/10.1109/tvt.2022.3145346 |
| Public URL | https://durham-repository.worktribe.com/output/1204012 |