Knowledge- and ambiguity-aware robot learning from corrective and evaluative feedback

Carlos Celemin*, Jens Kober

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

1 Citation (Scopus)
8 Downloads (Pure)


To deploy robots that can be adapted by non-expert users, interactive imitation learning (IIL) methods must be flexible with respect to the teacher's interaction preferences and must avoid assuming perfect teachers (oracles), instead accounting for the mistakes teachers make under the influence of diverse human factors. In this work, we propose an IIL method that improves human–robot interaction for non-expert and imperfect teachers in two directions. First, uncertainty estimation is included to endow the agents with awareness of their lack of knowledge (epistemic uncertainty) and of demonstration ambiguity (aleatoric uncertainty), so that the robot can request human input when it is deemed most necessary. Second, the proposed method gives teachers the flexibility to train with corrective demonstrations, evaluative reinforcements, and implicit positive feedback. The experimental results show an improvement in learning convergence over other learning methods when the agent learns from highly ambiguous teachers. Additionally, a user study found that the components of the proposed method improve both the teaching experience and the data efficiency of the learning process.
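The epistemic-uncertainty idea described above can be illustrated with a common proxy: disagreement among an ensemble of policies. The sketch below is not the paper's actual method; the toy linear policies, the threshold `epistemic_thresh`, and the gating rule are all illustrative assumptions.

```python
import numpy as np

def ensemble_predict(models, state):
    """Toy ensemble: each 'model' is a linear policy, action = W @ state."""
    return np.array([W @ state for W in models])

def uncertainty_gated_query(models, state, epistemic_thresh):
    """Return the mean action and whether to request human feedback.

    Epistemic uncertainty is approximated by the disagreement (standard
    deviation) across ensemble members; feedback is requested only when
    that disagreement exceeds a threshold (an illustrative assumption).
    """
    preds = ensemble_predict(models, state)          # shape: (n_models, action_dim)
    mean_action = preds.mean(axis=0)
    epistemic = preds.std(axis=0).mean()             # scalar disagreement score
    request_feedback = bool(epistemic > epistemic_thresh)
    return mean_action, epistemic, request_feedback

# Toy example: 5 random linear policies over a 3-dimensional state
rng = np.random.default_rng(0)
models = [rng.normal(size=(2, 3)) for _ in range(5)]
state = rng.normal(size=3)
action, epistemic, ask = uncertainty_gated_query(models, state, epistemic_thresh=0.5)
```

In this framing, states the agent has seen often yield agreeing predictions (low epistemic uncertainty, no query), while novel states yield disagreement and trigger a request for corrective or evaluative feedback.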

Original language: English
Pages (from-to): 16821-16839
Journal: Neural Computing and Applications
Issue number: 23
Publication status: Published - 2023


  • Active learning
  • Corrective demonstrations
  • Human reinforcement
  • Interactive imitation learning
  • Uncertainty


