Article

Robust path following on rivers using bootstrapped reinforcement learning

Ocean Engineering (Apr 15, 2024). Publisher Copyright: © 2024 The Authors.
DOI: 10.1016/j.oceaneng.2024.117207

Abstract

This paper develops a Deep Reinforcement Learning (DRL)-based approach for the navigation and control of autonomous surface vessels (ASVs) on inland waterways, where spatial constraints and environmental challenges such as high flow velocities and shallow banks require precise maneuvering. By implementing a state-of-the-art bootstrapped deep Q-learning (DQN) algorithm alongside a novel, flexible training environment generator, we developed a robust and accurate rudder control system capable of adapting to the dynamic conditions of inland waterways. The effectiveness of our approach is validated through comparisons with a vessel-specific Proportional-Integral-Derivative (PID) controller and a standard DQN controller, using real-world data from the lower and middle Rhine. The DRL algorithm demonstrates superior adaptability and generalizability across previously unseen scenarios and achieves high navigational accuracy. Our findings highlight the limitations of traditional control methods like PID in complex river environments, as well as the importance of training in diverse and realistic environments.
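To make the abstract's reference to bootstrapped deep Q-learning concrete, the sketch below shows what a bootstrapped DQN value network with multiple heads and mask-gated TD updates can look like, in the spirit of Osband et al.'s Bootstrapped DQN. It is not the authors' code: the class and function names, layer sizes, head count, and the bootstrap-mask convention are assumptions made purely for illustration.

```python
# Minimal bootstrapped DQN sketch (assumed implementation, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BootstrappedQNet(nn.Module):
    """Shared torso feeding K independent Q-value heads."""

    def __init__(self, obs_dim, n_actions, n_heads=10, hidden=128):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_heads)]
        )

    def forward(self, obs, head=None):
        z = self.torso(obs)
        if head is not None:
            # Act with a single head (one head is sampled per episode).
            return self.heads[head](z)
        # Evaluate all heads at once: shape (batch, K, n_actions).
        return torch.stack([h(z) for h in self.heads], dim=1)


def td_update(net, target_net, optimizer, batch, gamma=0.99):
    """One TD step; each head trains only on transitions its bootstrap mask selects."""
    obs, actions, rewards, next_obs, done, mask = batch  # mask: (batch, K) of 0/1
    q_all = net(obs)                                      # (batch, K, n_actions)
    idx = actions.view(-1, 1, 1).expand(-1, q_all.size(1), 1)
    q_taken = q_all.gather(2, idx).squeeze(2)             # (batch, K)
    with torch.no_grad():
        q_next = target_net(next_obs).max(dim=2).values   # (batch, K)
        target = rewards.unsqueeze(1) + gamma * (1.0 - done.unsqueeze(1)) * q_next
    per_elem = F.smooth_l1_loss(q_taken, target, reduction="none")
    loss = (mask * per_elem).sum() / mask.sum().clamp(min=1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At acting time a single head is sampled per episode and followed greedily, which is what gives bootstrapped DQN its temporally consistent exploration; in the paper's setting the discrete actions would correspond to rudder commands, though that mapping is likewise an assumption here.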
