Fear and hope emerge from anticipation in model-based reinforcement learning

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

8 Citations (Scopus)


Social agents and robots will require both learning and emotional capabilities to successfully enter society. This paper connects both challenges by studying models of emotion generation in sequential decision-making agents. Previous work in this field has focused on model-free reinforcement learning (RL). However, important emotions like hope and fear need anticipation, which requires a model and forward simulation. Taking inspiration from the psychological Belief-Desire Theory of Emotions (BDTE), our work specifies models of hope and fear based on best and worst forward traces. To efficiently estimate these traces, we integrate a well-known Monte Carlo Tree Search procedure (UCT) into a model-based RL architecture. Test results in three known RL domains illustrate emotion dynamics, dependencies on policy and environmental stochasticity, and plausibility in individual Pacman game settings. Our models enable agents to naturally elicit hope and fear during learning and, moreover, to explain what anticipated event caused this.
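The core idea of deriving hope and fear from best and worst forward traces can be illustrated with a minimal sketch. This is not the paper's implementation (which uses UCT for trace estimation); it substitutes plain Monte Carlo forward simulation in a toy stochastic MDP, and the function names and the simple gap-based hope/fear definitions are illustrative assumptions.

```python
import random

def sample_returns(transitions, state, depth, n=500, seed=0):
    """Monte Carlo forward simulation: sample n traces of at most `depth`
    steps from `state` and return the cumulative reward of each trace.
    `transitions` maps state -> list of (next_state, reward, prob).
    (Illustrative stand-in for the paper's UCT-based trace estimation.)"""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        s, total = state, 0.0
        for _ in range(depth):
            if s not in transitions:
                break  # terminal state
            outcomes = transitions[s]
            s, r, _ = rng.choices(outcomes,
                                  weights=[p for *_, p in outcomes])[0]
            total += r
        totals.append(total)
    return totals

def hope_and_fear(returns):
    """Hope: gap between the best anticipated return and the expectation.
    Fear: gap between the expectation and the worst anticipated return.
    (A simplification of the paper's BDTE-based definitions.)"""
    expected = sum(returns) / len(returns)
    return max(returns) - expected, expected - min(returns)

# Toy 'cliff' choice: from state "s" the agent reaches a terminal state
# safely (+1) with p=0.8 or falls (-10) with p=0.2.
T = {"s": [("t", 1.0, 0.8), ("t", -10.0, 0.2)]}
returns = sample_returns(T, "s", depth=1)
hope, fear = hope_and_fear(returns)
```

Here fear dominates hope: the worst trace (-10) lies much further below the expected return than the best trace (+1) lies above it, matching the intuition that a small chance of a large loss elicits strong fear.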

Original language: English
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Number of pages: 7
Publication status: Published - 2016
Event: IJCAI 2016: 25th International Joint Conference on Artificial Intelligence - New York, United States
Duration: 9 Jul 2016 - 15 Jul 2016


Conference: IJCAI 2016
Abbreviated title: IJCAI 2016
Country: United States
City: New York


