
Learning Modular Robotic Control via Reinforcement Learning using Attention based Global State Prediction

Public Deposited


Deep Reinforcement Learning (DRL) has shown remarkable success in controlling single-robot systems, and it has produced impressive results in multi-robot coordination, but notable shortcomings remain. Although DRL is increasingly popular for real-world multi-robot autonomy, it struggles as the complexity of the control system grows. In systems with many agents, and consequently many Degrees of Freedom (DOF), training can be prohibitively time-consuming or so unstable that optimal behaviors are difficult to learn. A primary issue DRL faces in multi-robot systems is managing the simultaneous learning process, in which inter-agent interactions feed the model inconsistent information. To overcome this training instability, we investigated Attention-based Global State Prediction (AGSP), which uses information from neighboring agents to form a belief over the future states of all agents. AGSP predicts future states accurately, even for large numbers of agents, from messages about the agents' collective actions. We applied AGSP to a decentralized modular locomotion task and empirically evaluated the resulting emergent properties. We found that AGSP produces policies with superior stability and adaptability, making it a useful tool for developing safe, consistent controllers with low rates of failure.
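
As a rough illustration of the idea described in the abstract, the sketch below shows one way an attention-based global state predictor could be structured in PyTorch: per-agent (state, action) messages are encoded, self-attention aggregates neighbor information, and a decoder predicts each agent's next local state, which together form the predicted global state. All class names, layer sizes, and dimensions are illustrative assumptions, not the architecture used in this thesis.

    # Minimal sketch, assuming a PyTorch setup; this is NOT the thesis implementation.
    import torch
    import torch.nn as nn


    class GlobalStatePredictor(nn.Module):
        """Illustrative attention-based predictor of the next global state."""

        def __init__(self, state_dim: int, action_dim: int, embed_dim: int = 64, num_heads: int = 4):
            super().__init__()
            # Encode each agent's communicated (state, action) message into an embedding.
            self.encoder = nn.Linear(state_dim + action_dim, embed_dim)
            # Self-attention lets every agent weigh the messages of all other agents.
            self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            # Decode each attended embedding into a predicted next local state.
            self.decoder = nn.Sequential(
                nn.Linear(embed_dim, embed_dim),
                nn.ReLU(),
                nn.Linear(embed_dim, state_dim),
            )

        def forward(self, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
            # states: (batch, num_agents, state_dim); actions: (batch, num_agents, action_dim)
            messages = self.encoder(torch.cat([states, actions], dim=-1))
            attended, _ = self.attention(messages, messages, messages)
            # The per-agent predictions together form the predicted global state.
            return self.decoder(attended)


    if __name__ == "__main__":
        batch, num_agents, state_dim, action_dim = 32, 8, 12, 4
        model = GlobalStatePredictor(state_dim, action_dim)
        states = torch.randn(batch, num_agents, state_dim)
        actions = torch.randn(batch, num_agents, action_dim)
        predicted = model(states, actions)                # (32, 8, 12)
        target = torch.randn_like(predicted)              # placeholder for observed next states
        loss = nn.functional.mse_loss(predicted, target)  # could serve as an auxiliary training loss
        loss.backward()

In such a setup, the prediction error against the observed next states could act as an auxiliary signal during policy training; the specific training scheme used in the thesis is not described here.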

Creator
Contributors
Degree
Unit
Publisher
Identifier
  • etd-115045
Keyword
Advisor
Committee
Defense date
Year
  • 2023
Date created
  • 2023-12-12
Resource type
Source
  • etd-115045
Rights statement
Last modified
  • 2024-01-25

Relationships

In Collection:


Permanent link to this page: https://digital.wpi.edu/show/bc386p451