Abstract
This paper develops a Deep Reinforcement Learning (DRL)-based approach for the navigation and control of autonomous surface vessels (ASVs) on inland waterways, where spatial constraints and environmental challenges such as high flow velocities and shallow banks require precise maneuvering. By implementing a state-of-the-art bootstrapped deep Q-network (DQN) algorithm alongside a novel, flexible training environment generator, we developed a robust and accurate rudder control system capable of adapting to the dynamic conditions of inland waterways. The effectiveness of our approach is validated through comparisons with a vessel-specific Proportional-Integral-Derivative (PID) controller and a standard DQN controller, using real-world data from the lower and middle Rhine. The DRL algorithm demonstrates superior adaptability and generalizability across previously unseen scenarios and achieves high navigational accuracy. Our findings highlight the limitations of traditional control methods such as PID in complex river environments, as well as the importance of training in diverse and realistic environments.
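For readers unfamiliar with the bootstrapped DQN technique the abstract refers to, the core idea (Osband et al.'s bootstrapped DQN) is to maintain several Q-function "heads", train each head only on a random bootstrap subsample of transitions, and explore by committing to one randomly sampled head per episode. The following is a minimal tabular sketch of that mechanism, not the paper's implementation: the state/action sizes, mask probability, and all hyperparameters here are illustrative assumptions, and a real controller would use a shared neural-network torso with K output heads rather than per-head tables.

```python
import numpy as np

rng = np.random.default_rng(0)

N_HEADS = 5        # number of bootstrap heads (illustrative hyperparameter)
N_STATES = 16      # toy discrete state space, stand-in for the vessel state
N_ACTIONS = 3      # e.g. rudder left / hold / right (illustrative)
GAMMA = 0.95       # discount factor
ALPHA = 0.1        # learning rate
MASK_P = 0.5       # Bernoulli probability that a head trains on a transition

# One tabular Q-function per head; bootstrapped DQN instead shares a network
# torso and gives each head its own output layer.
Q = np.zeros((N_HEADS, N_STATES, N_ACTIONS))

def select_action(state, head):
    """Act greedily under the single head sampled for this episode."""
    return int(np.argmax(Q[head, state]))

def update(state, action, reward, next_state, done):
    """Q-learning update applied only to heads whose bootstrap mask is 1."""
    mask = rng.random(N_HEADS) < MASK_P
    for k in np.flatnonzero(mask):
        target = reward if done else reward + GAMMA * Q[k, next_state].max()
        Q[k, state, action] += ALPHA * (target - Q[k, state, action])
```

In use, a head index is drawn once at the start of each episode (e.g. `head = rng.integers(N_HEADS)`) and kept fixed for the whole episode, which yields temporally consistent, "deep" exploration; disagreement between heads concentrates exploration on uncertain states.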