A Hierarchical Maze Navigation Algorithm with Reinforcement Learning and Mapping

Tommaso Mannucci, Erik-Jan van Kampen

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-reviewed

4 Citations (Scopus)
56 Downloads (Pure)

Abstract

Goal-finding in an unknown maze is a challenging problem for a Reinforcement Learning agent, because the corresponding state space can be large, if not intractable, and the agent does not usually have a model of the environment. Hierarchical Reinforcement Learning has been shown to improve the tractability and learning time of complex problems, as well as to facilitate learning a coherent transition model of the environment. Nonetheless, considerable time is still needed to learn the transition model, so that initially the agent can perform poorly, getting trapped in dead ends and colliding with obstacles. This paper proposes a strategy for maze exploration that, by means of sequential tasking and off-line training on an abstract environment, provides the agent with a minimal level of performance from the very beginning of exploration. In particular, this approach makes it possible to prevent collisions with obstacles, thus enforcing a safety constraint on the agent.
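The abstract's core idea of guaranteeing a safety constraint from the first step of exploration can be illustrated with a minimal sketch: plain tabular Q-learning on a toy grid maze in which a hard safety mask removes any action that would collide with a wall, so the agent never collides even before it has learned anything. This is an illustrative assumption, not the authors' hierarchical algorithm; the maze layout, function names, and hyperparameters are all hypothetical.

```python
import random

# Toy grid maze ('#' = wall, 'S' = start, 'G' = goal); layout is hypothetical.
MAZE = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..G#",
    "#####",
]
ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

def find(ch):
    """Return (row, col) of the first cell containing ch."""
    for r, row in enumerate(MAZE):
        c = row.find(ch)
        if c != -1:
            return (r, c)

def safe_actions(state):
    """Safety mask: only actions that do not step into a wall are ever offered,
    so the agent cannot collide even before it has learned anything."""
    r, c = state
    return [a for a, (dr, dc) in ACTIONS.items() if MAZE[r + dr][c + dc] != "#"]

def q_learning(episodes=500, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning restricted to the safe action set."""
    rng = random.Random(seed)
    start, goal = find("S"), find("G")
    Q = {}
    for _ in range(episodes):
        s = start
        for _ in range(100):
            acts = safe_actions(s)
            a = rng.choice(acts) if rng.random() < eps else max(
                acts, key=lambda x: Q.get((s, x), 0.0))
            dr, dc = ACTIONS[a]
            s2 = (s[0] + dr, s[1] + dc)
            reward = 1.0 if s2 == goal else -0.01  # small per-step cost
            best_next = max(Q.get((s2, a2), 0.0) for a2 in safe_actions(s2))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((s, a), 0.0))
            if s2 == goal:
                break
            s = s2
    return Q

def greedy_path(Q, max_steps=50):
    """Follow the learned greedy policy from the start; returns visited states."""
    s, goal = find("S"), find("G")
    path = [s]
    while s != goal and len(path) <= max_steps:
        a = max(safe_actions(s), key=lambda x: Q.get((s, x), 0.0))
        dr, dc = ACTIONS[a]
        s = (s[0] + dr, s[1] + dc)
        path.append(s)
    return path
```

Because exploration is confined to `safe_actions`, every trajectory the agent ever generates is collision-free; the paper's off-line training on an abstract environment plays a role loosely analogous to this prior knowledge of which moves are safe.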
Original language: English
Title of host publication: 2016 IEEE Symposium Series on Computational Intelligence
Subtitle of host publication: Athens, Greece
Editors: Y. Jin, S. Kollias
Publisher: IEEE
Number of pages: 8
DOIs
Publication status: E-pub ahead of print - 2016
Event: 2016 IEEE Symposium Series on Computational Intelligence - Athens, Greece
Duration: 6 Oct 2016 – 9 Oct 2016
http://ssci2016.cs.surrey.ac.uk/

Conference

Conference: 2016 IEEE Symposium Series on Computational Intelligence
Abbreviated title: SSCI 2016
Country: Greece
City: Athens
Period: 6/10/16 – 9/10/16
Internet address: http://ssci2016.cs.surrey.ac.uk/
