Leveraging Factored State Representations for Enhanced Efficiency in Reinforcement Learning

Research output: Thesis › Dissertation (TU Delft)


Abstract

Reinforcement learning techniques have demonstrated great promise in tackling sequential decision-making problems. However, the inherent complexity of real-world scenarios presents significant challenges for their application. This thesis takes a fresh approach that explores the untapped potential of factored state representations as a means to enhance the efficiency of reinforcement learning.

Factored representations describe the environment through a set of variables, each capturing a distinct feature of the environment. These variables, together with their possible values, define the agent’s states. Unlike flat, monolithic representations, factored representations expose the underlying structure of the environment and thereby refine our understanding of the problem at hand.
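
As a minimal illustration (not taken from the thesis), a factored state for a hypothetical grid-world task could be written out as a small set of named variables; the variable names below are invented for the example.

from dataclasses import dataclass

# Hypothetical factored state: each field is one state variable, and the
# full state is the joint assignment of all variables.
@dataclass(frozen=True)
class FactoredState:
    agent_x: int      # agent's column on the grid
    agent_y: int      # agent's row on the grid
    has_key: bool     # whether the key has been collected
    door_open: bool   # whether the door is currently open

# A flat representation would enumerate every combination of these values
# as a single opaque state index; the factored form keeps the variables
# explicit, so structure such as "door_open depends only on has_key and
# the agent's position" remains visible to the learning algorithm.
s = FactoredState(agent_x=2, agent_y=0, has_key=False, door_open=False)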

By analyzing variable dependencies, we can abstract simplified representations of the environment states and construct computationally lightweight models. To do so, we explore potential factorizations of the key functions governing the reinforcement learning problem, such as the transition and reward functions, policies, and value functions. These factorizations can be achieved by exploiting variable redundancies and leveraging relations of conditional independence.
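
For example, in the standard factored-MDP formulation (a textbook construction, not a contribution specific to this thesis), conditional independence among the state variables lets the transition function factor into a product of small conditional distributions, one per variable:

% DBN-style factorization of the transition model: the next value of each
% variable x_i depends only on its parent variables pa(x_i) and the action,
% rather than on the full state s = (x_1, ..., x_n).
P(s' \mid s, a) \;=\; \prod_{i=1}^{n} P\bigl(x_i' \mid \mathrm{pa}(x_i), a\bigr)

Each factor ranges over only a few variables, which is what makes the resulting models computationally lightweight.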

This thesis proposes a set of methods that are shown to improve the efficiency and scalability of reinforcement learning in complex scenarios. We hope that the findings of this research contribute to showcasing the potential of factored representations and serve as inspiration for future research in this direction.
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • Oliehoek, F.A., Supervisor
  • Spaan, M.T.J., Supervisor
Award date: 19 Jan 2024
DOIs
Publication status: Published - 2024

