Social agents and robots will require both learning and emotional capabilities to enter society successfully. This paper connects the two challenges by studying models of emotion generation in sequential decision-making agents. Previous work in this field has focused on model-free reinforcement learning (RL). However, important emotions such as hope and fear involve anticipation, which requires a model and forward simulation. Taking inspiration from the psychological Belief-Desire Theory of Emotions (BDTE), our work specifies models of hope and fear based on best and worst forward traces. To estimate these traces efficiently, we integrate a well-known Monte Carlo Tree Search procedure (UCT) into a model-based RL architecture. Test results in three known RL domains illustrate emotion dynamics, dependencies on policy and environmental stochasticity, and plausibility in individual Pacman game settings. Our models enable agents to naturally elicit hope and fear during learning and, moreover, to explain which anticipated event caused them.
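The core idea of the abstract can be illustrated with a toy sketch: forward-simulate traces through a model of the environment and derive hope from the best anticipated return and fear from the worst. This is a hypothetical simplification, not the paper's implementation; the function names (`rollout`, `hope_and_fear`) and the use of plain Monte Carlo rollouts in place of full UCT are assumptions made here for brevity.

```python
import random

def rollout(model, policy, state, depth, gamma=0.95):
    """Simulate one forward trace through the model; return its discounted return.

    `model(state, action)` is assumed to return (next_state, reward, done).
    """
    ret, disc = 0.0, 1.0
    for _ in range(depth):
        action = policy(state)
        state, reward, done = model(state, action)
        ret += disc * reward
        disc *= gamma
        if done:
            break
    return ret

def hope_and_fear(model, policy, state, n_traces=100, depth=10):
    """Toy emotion signals: hope tracks the best forward trace, fear the worst."""
    returns = [rollout(model, policy, state, depth) for _ in range(n_traces)]
    best, worst = max(returns), min(returns)
    hope = max(best, 0.0)    # anticipation of the best-case outcome
    fear = max(-worst, 0.0)  # anticipation of the worst-case outcome
    return hope, fear
```

Because the same trace that maximizes (or minimizes) the return is available, the agent can also report *which* anticipated outcome produced the emotion, mirroring the explanatory claim in the abstract.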
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Number of pages: 7
Publication status: Published - 2016
Event: IJCAI 2016: 25th International Joint Conference on Artificial Intelligence, New York, United States, 9 Jul 2016 - 15 Jul 2016
Abbreviated title: IJCAI 2016