Task-driven autonomous robot navigation with imitation learning and sensor fusion
The ability to interact with dynamic and complex environments with minimal prior knowledge is a key challenge for mobile robots. The interaction can take the form of avoiding dynamic obstacles or following human instructions. Such robotic systems have various applications, such as search and rescue, autonomous delivery, and self-driving. Designing and implementing controllers for such systems by hand requires tremendous effort and is prone to error. Rather than programming such controllers, it is more beneficial to allow the robot to learn from others’ and its own experiences. In this thesis, we focus on enabling a mobile robot to perform different tasks based on visual inputs in indoor environments via imitation learning. Imitation learning is a data-driven approach that uses expert demonstrations to train a policy that performs the demonstrated task. However, it requires heavy supervision from human experts, and it is hard to perform multiple tasks with the same model. Our first framework focuses on reducing human supervision. It extends the Dataset Aggregation (DAgger) method with sensor fusion, allowing the robot to learn a navigation policy in a self-supervised manner and thus minimizing human supervision. The second framework learns a multi-task policy by sharing information between related tasks, and it performs different tasks based on human instructions; these tasks are navigating to different indoor environments or exploring the current one. We performed an extensive set of experiments for each framework and demonstrate that the proposed frameworks achieve high performance and even surpass the human demonstrator in some scenarios.
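For readers unfamiliar with DAgger, the core loop can be sketched in a few lines. This is a minimal illustrative sketch only, not the framework developed in the thesis: the toy environment, `expert`, `step`, and the table-based learner below are all hypothetical stand-ins for the real policy network, sensors, and supervision mechanism.

```python
# Minimal sketch of the DAgger (Dataset Aggregation) loop. All components
# here are hypothetical stand-ins used purely to illustrate the algorithm.

def train_policy(dataset):
    # Hypothetical learner: for each state, keep the expert action
    # seen most often in the aggregated dataset.
    table = {}
    for state, action in dataset:
        table.setdefault(state, []).append(action)
    return {s: max(set(a), key=a.count) for s, a in table.items()}

def dagger(expert, initial_states, step, n_iters=3, horizon=5):
    """Roll out the current policy, label every visited state with the
    expert's action, aggregate the labels, and retrain."""
    # Bootstrap from expert labels on the initial states.
    dataset = [(s, expert(s)) for s in initial_states]
    policy = train_policy(dataset)
    for _ in range(n_iters):
        for s0 in initial_states:
            s = s0
            for _ in range(horizon):
                a = policy.get(s, expert(s))    # act with the learned policy
                dataset.append((s, expert(s)))  # but label with the expert
                s = step(s, a)
        policy = train_policy(dataset)          # retrain on aggregated data
    return policy
```

Because the learner, not the expert, chooses the actions during rollouts, the dataset covers the states the learned policy actually visits; this is what distinguishes DAgger from plain behavior cloning and why the thesis's first framework targets the per-state expert labels as the supervision to reduce.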