
Master of Science in Robotics Thesis Talk

Speaker
ZHE HUANG
Master's Student
Robotics Institute
Carnegie Mellon University

When
-

Where
In Person and Virtual - ET

Description

Due to the complex and safety-critical nature of autonomous driving, recent works typically test their ideas on simulators designed for the very purpose of advancing self-driving research. Despite the convenience of modeling autonomous driving as a trajectory optimization problem, few of these methods resort to online reinforcement learning (RL) to address challenging driving scenarios. This is mainly because classic online RL algorithms were originally designed for toy problems such as Atari games, which are solvable within hours. In contrast, it may take weeks or months to get satisfactory results on self-driving tasks using these online RL methods, as a consequence of the time-consuming simulation and the difficulty of the problem itself. Thus, a promising online RL pipeline for autonomous driving should be efficiency-driven.

In this thesis, we investigate the inefficiency of directly applying generic online RL algorithms to self-driving pipelines. We propose two distributed multi-agent RL algorithms, Multi-Parallel SAC (off-policy) and Multi-Parallel PPO (on-policy), both of which are highly scalable because they run asynchronously. Our methods are dedicated to accelerating online RL training on the CARLA simulator by establishing both inter-process and intra-process parallelization. We demonstrate that our multi-agent methods achieve state-of-the-art performance on various CARLA self-driving tasks in a much shorter and more reasonable time.

Thesis Committee:
Prof. Jeff Schneider (Advisor)
Prof. David Held
Adam Villaflor
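To illustrate the kind of inter-process parallelization the abstract refers to, the sketch below shows asynchronous experience collection with multiple worker processes feeding a single learner. It is a minimal, hypothetical Python example, not the thesis implementation: DummyDrivingEnv, rollout_worker, and learner are placeholder names, and a real setup would wrap CARLA clients and run actual SAC or PPO updates instead of the dummy environment and random actions used here.

# Minimal sketch of asynchronous, multi-process experience collection
# for off-policy RL. All names are illustrative placeholders, not the
# thesis implementation; a real pipeline would wrap CARLA clients.
import multiprocessing as mp
import random


class DummyDrivingEnv:
    """Stand-in for a CARLA client; returns random transitions."""

    def reset(self):
        return [0.0] * 4  # placeholder observation

    def step(self, action):
        obs = [random.random() for _ in range(4)]
        reward = random.random()
        done = random.random() < 0.05
        return obs, reward, done


def rollout_worker(worker_id, transition_queue, steps):
    """Each worker owns one environment and pushes transitions asynchronously."""
    env = DummyDrivingEnv()
    obs = env.reset()
    for _ in range(steps):
        action = random.random()  # a real worker would query the current policy
        next_obs, reward, done = env.step(action)
        transition_queue.put((worker_id, obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs


def learner(transition_queue, total_transitions):
    """Consumes transitions into a replay buffer; gradient updates would go here."""
    replay_buffer = []
    for _ in range(total_transitions):
        replay_buffer.append(transition_queue.get())
        # an off-policy update (e.g., a SAC step) would sample from replay_buffer here
    print(f"collected {len(replay_buffer)} transitions from parallel workers")


if __name__ == "__main__":
    num_workers, steps_per_worker = 4, 250
    queue = mp.Queue()
    workers = [
        mp.Process(target=rollout_worker, args=(i, queue, steps_per_worker))
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()
    learner(queue, num_workers * steps_per_worker)
    for w in workers:
        w.join()

Because the workers step their environments independently of the learner, slow simulation in one process does not stall the others; this is the basic scalability argument behind running the collection asynchronously.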

In Person and Zoom Participation. See announcement.