Task driven autonomous robot navigation with imitation learning and sensor fusion
dc.contributor.advisor | Wu, Shaoen, 1976- | |
dc.contributor.author | Xu, Junhong | |
dc.date.accessioned | 2018-04-30T18:24:29Z | |
dc.date.available | 2018-04-30T18:24:29Z | |
dc.date.issued | 2018-05-05 | |
dc.description.abstract | The ability to interact with dynamic and complex environments with minimal prior knowledge is a key challenge for mobile robots. The interaction can be in the form of avoiding dynamic obstacles or following human instructions. Such robotic systems have various applications, such as search and rescue, autonomous delivery, or self-driving. Designing and implementing controllers for such robotic systems requires tremendous effort and is always prone to error. Rather than programming such a controller, it is more beneficial to allow the robot to learn from others' and its own experiences. In this thesis, we focus on enabling the mobile robot to perform different tasks based on visual inputs in indoor environments via imitation learning. Imitation learning is a data-driven approach that uses expert demonstrations to train a policy that performs the demonstrated task. However, it requires heavy supervision from human experts. In addition, it is hard to perform multiple tasks using the same model. Our first framework focuses on reducing human supervision. It is an extension of the Dataset Aggregation (DAgger) method, in which we use a sensor fusion technique to allow the robot to learn a navigation policy in a self-supervised manner, thus minimizing human supervision. The second framework learns a multi-task policy using shared information between the related tasks. It performs different tasks based on human instructions; these tasks include navigating to different indoor environments or exploring the current one. We performed an extensive collection of experiments for each framework and demonstrate that the proposed frameworks achieve high performance and even surpass the human demonstrator in some scenarios. | en_US |
dc.description.degree | Thesis (M.S.) | en_US |
dc.description.sponsorship | Department of Computer Science | |
dc.description.tableofcontents | Related work -- Multi-sensory semi-supervised imitation learning -- Shared multi-task imitation learning -- Experiments. | |
dc.identifier.uri | http://cardinalscholar.bsu.edu/handle/20.500.14291/201154 | |
dc.subject.lcsh | Intelligent agents (Computer software) | |
dc.subject.lcsh | Mobile robots -- Programming. | |
dc.subject.lcsh | Self-organizing systems. | |
dc.title | Task driven autonomous robot navigation with imitation learning and sensor fusion | en_US |
dc.title.alternative | Title on signature form: Task driven autonomous robot navigation with imitation learning and sensor fusion | en_US |