Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks
This thesis focuses on two key problems in reinforcement learning: How should reward functions be designed so that learning-based control produces the intended behaviors in autonomous systems? And given a complex mission specification, how can the reward function be shaped to achieve fast convergence and reduce sample complexity while learning the optimal policy?

To answer these questions, the first and major part of the thesis investigates inverse reinforcement learning (IRL), whose purpose is to learn a reward function from expert demonstrations. Existing algorithms often assume that all expert demonstrations are generated by the same reward function. This assumption may be invalid, since one may need to aggregate data from multiple experts to obtain a sufficient set of demonstrations. We develop a novel method, called Non-parametric Behavior Clustering IRL, that simultaneously clusters behaviors and learns their reward functions from demonstrations generated by more than one expert/behavior. Our approach builds on an expectation-maximization formulation combined with non-parametric clustering in the IRL setting. We apply the algorithm to learn multiple driver behaviors (e.g., aggressive vs. evasive driving) from driving demonstrations.

In the second part, we study whether reinforcement learning can be used to generate complex behaviors specified in a formal logic, Linear Temporal Logic (LTL). LTL tasks may specify temporally extended goals, safety, surveillance, and reactive behaviors in a dynamic environment. We introduce reward shaping under LTL constraints to improve the rate of convergence in learning optimal and provably correct policies. Our approach exploits the relation between reward shaping and actor-critic methods to speed up convergence and, as a consequence, reduce the number of training samples. We integrate compositional reasoning from formal methods with actor-critic reinforcement learning algorithms to initialize a heuristic value function for reward shaping. This initialization directs the agent toward efficient planning under more complex behavior specifications in LTL. The investigation takes an initial step toward integrating machine learning with formal methods and contributes to building highly autonomous, self-adaptive robots for complex missions.
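The clustering method itself is developed in the thesis; as a rough, purely illustrative sketch of the underlying idea, the snippet below runs an expectation-maximization loop that alternates between assigning demonstrations to reward hypotheses (E-step) and refitting each cluster's reward (M-step). The toy chain MDP, the Boltzmann (MaxEnt-style) demonstration likelihood, the finite-difference M-step, and the fixed cluster count `K` are all assumptions made here for brevity; the thesis's algorithm is non-parametric, so the number of behavior clusters would not be fixed in advance.

```python
import numpy as np

# Toy chain MDP (assumed for illustration): states 0..N-1, two deterministic actions.
N, A, GAMMA = 8, 2, 0.95
P = np.stack([np.maximum(np.arange(N) - 1, 0),       # successor under "left"
              np.minimum(np.arange(N) + 1, N - 1)])  # successor under "right"

def soft_policy(w):
    """Boltzmann policy from soft value iteration under per-state reward w."""
    V = np.zeros(N)
    for _ in range(200):
        Q = w[None, :] + GAMMA * V[P]              # shape (A, N)
        m = Q.max(axis=0)
        V = m + np.log(np.exp(Q - m).sum(axis=0))  # stable log-sum-exp over actions
    return np.exp(Q - V[None, :])                  # pi[a, s]; columns sum to 1

def weighted_loglik(demos, resp, w):
    """Responsibility-weighted log-likelihood of (state, action) demos under w."""
    pi = soft_policy(w)
    return sum(r * sum(np.log(pi[a, s] + 1e-12) for s, a in d)
               for d, r in zip(demos, resp))

def em_cluster_irl(demos, K, iters=15, lr=0.3, eps=1e-3):
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(K, N))  # per-cluster reward weights
    mix = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior responsibility of each cluster for each demonstration.
        L = np.array([[weighted_loglik([d], [1.0], W[k]) for k in range(K)]
                      for d in demos]) + np.log(mix + 1e-12)[None, :]
        R = np.exp(L - L.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True)
        mix = R.mean(axis=0)
        # M-step: ascend each cluster's responsibility-weighted likelihood.
        # (Finite differences for clarity; an analytic MaxEnt gradient is faster.)
        for k in range(K):
            base = weighted_loglik(demos, R[:, k], W[k])
            grad = np.array([(weighted_loglik(demos, R[:, k],
                                              W[k] + eps * np.eye(N)[i]) - base) / eps
                             for i in range(N)])
            W[k] += lr * grad
    return W, R

def rollout(w, s0=3, T=12, seed=0):
    """Sample one demonstration from the Boltzmann policy for reward w."""
    rng, pi, s, traj = np.random.default_rng(seed), soft_policy(w), s0, []
    for _ in range(T):
        a = rng.choice(A, p=pi[:, s])
        traj.append((s, a))
        s = P[a, s]
    return traj

# Two hypothetical "experts": one rewarded at the right end, one at the left.
demos = ([rollout(np.eye(N)[N - 1], seed=i) for i in range(5)]
         + [rollout(np.eye(N)[0], seed=10 + i) for i in range(5)])
W, R = em_cluster_irl(demos, K=2)
print(np.round(R, 2))  # responsibilities should split the demos into two clusters
```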
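For the second part, the abstract's "heuristic value function for reward shaping" is consistent with standard potential-based shaping (Ng, Harada, and Russell, 1999); the concrete form below, shaping on a product of the MDP with an automaton for the LTL formula and setting the potential to a heuristic value estimate, is an assumption of this sketch rather than a statement of the thesis's exact construction.

```latex
% Potential-based shaping: adding the term F(s,a,s') = \gamma\Phi(s') - \Phi(s)
% to the reward leaves the optimal policy unchanged.
\[
  \tilde{r}(s,a,s') \;=\; r(s,a,s') \;+\; \gamma\,\Phi(s') \;-\; \Phi(s).
\]
% Assumed setting: learning runs on the product state space S \times Q_\varphi,
% where Q_\varphi are the states of an automaton for the LTL formula \varphi,
% and the potential is a heuristic value estimate that initializes the critic:
\[
  \Phi(s,q) \;\approx\; \hat{V}(s,q), \qquad (s,q) \in S \times Q_\varphi .
\]
```

When the potential approximates the optimal value function, the shaped reward is nearly zero along optimal trajectories, so the critic starts close to its fixed point; this is the mechanism by which a good initialization can reduce the number of training samples.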
- Creator
- Contributors
- Degree
- Unit
- Publisher
- Language: English
- Identifier: etd-083017-144531
- Keyword
- Advisor
- Committee
- Defense date
- Year: 2017
- Date created: 2017-08-30
- Resource type
- Rights statement
Relationships
- In Collection:
Objects
Items
| Title | Visibility |
|---|---|
| thesis.pdf | Public |
| Thesis_Presentation_Published.pdf | Public |
Permanent link to this page: https://digital.wpi.edu/show/3f462556g